Test Report: Docker_Linux_crio_arm64 21969

ab0a8cfdd326918695f502976b3bdb249954a688:2025-11-23:42465

Failed tests (38/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.68
35 TestAddons/parallel/Registry 14.65
36 TestAddons/parallel/RegistryCreds 0.47
37 TestAddons/parallel/Ingress 144.08
38 TestAddons/parallel/InspektorGadget 6.28
39 TestAddons/parallel/MetricsServer 5.48
41 TestAddons/parallel/CSI 32.14
42 TestAddons/parallel/Headlamp 3.93
43 TestAddons/parallel/CloudSpanner 5.28
44 TestAddons/parallel/LocalPath 9.81
45 TestAddons/parallel/NvidiaDevicePlugin 6.31
46 TestAddons/parallel/Yakd 5.26
97 TestFunctional/parallel/ServiceCmdConnect 603.52
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.13
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.59
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
140 TestFunctional/parallel/ServiceCmd/DeployApp 600.88
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
153 TestFunctional/parallel/ServiceCmd/Format 0.51
154 TestFunctional/parallel/ServiceCmd/URL 0.53
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 413.25
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 3.32
191 TestJSONOutput/pause/Command 1.81
197 TestJSONOutput/unpause/Command 1.95
283 TestPause/serial/Pause 7.98
344 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.35
349 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.68
354 TestStartStop/group/old-k8s-version/serial/Pause 6.86
362 TestStartStop/group/no-preload/serial/Pause 6.48
366 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.02
373 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.58
376 TestStartStop/group/embed-certs/serial/Pause 6.83
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.47
390 TestStartStop/group/newest-cni/serial/Pause 6.33
393 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.17
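
The addon-disable failures detailed below (Volcano, Registry, RegistryCreds) all exit with MK_ADDON_DISABLE_PAUSED: before disabling an addon, minikube checks whether the cluster is paused, and that check runs "sudo runc list -f json" on the node, which fails with "open /run/runc: no such file or directory". A minimal manual probe, reusing the profile name and the exact commands from the failing output below (a triage sketch, not a verified fix):

	# listing kube-system containers via crictl succeeds in the logs
	out/minikube-linux-arm64 -p addons-984173 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# the step that actually fails: runc state listing against /run/runc
	out/minikube-linux-arm64 -p addons-984173 ssh "sudo runc list -f json"
	# check whether the runc state directory exists on the node at all
	out/minikube-linux-arm64 -p addons-984173 ssh "ls -la /run/runc"
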
TestAddons/serial/Volcano (0.68s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-984173 addons disable volcano --alsologtostderr -v=1: exit status 11 (683.355958ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 09:00:13.849643  291614 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:00:13.851275  291614 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:13.851296  291614 out.go:374] Setting ErrFile to fd 2...
	I1123 09:00:13.851303  291614 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:13.851626  291614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:00:13.851978  291614 mustload.go:66] Loading cluster: addons-984173
	I1123 09:00:13.852512  291614 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:13.852533  291614 addons.go:622] checking whether the cluster is paused
	I1123 09:00:13.852692  291614 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:13.852712  291614 host.go:66] Checking if "addons-984173" exists ...
	I1123 09:00:13.853262  291614 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 09:00:13.875761  291614 ssh_runner.go:195] Run: systemctl --version
	I1123 09:00:13.875815  291614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 09:00:13.893661  291614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 09:00:14.001995  291614 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:00:14.002126  291614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:00:14.034758  291614 cri.go:89] found id: "742ade421fb244b66d8fcfec87fa144fdc7f8738e38cca57ac6ac0bb8fbceba5"
	I1123 09:00:14.034784  291614 cri.go:89] found id: "f6783f9da95524615f3aa651e3af1196eb24de610f8b5966c9f13c754788eeea"
	I1123 09:00:14.034791  291614 cri.go:89] found id: "37d6af059fa8d9a5c10fe2947c3c9208c14a28bda6e706d53ace9352a57d3538"
	I1123 09:00:14.034795  291614 cri.go:89] found id: "66657c8a6cec57d0f3f4516fbacce8c43b7cd7b560ee7e99d4320d4d8ecee0db"
	I1123 09:00:14.034798  291614 cri.go:89] found id: "8586599f3919f69a8d7f1a7d090d598631c698412878d914f2b728fa92c78020"
	I1123 09:00:14.034802  291614 cri.go:89] found id: "6f902ae88d97ebbadeb5af33479296f1cb746c0980deddddd1b09ef5f3bc8365"
	I1123 09:00:14.034805  291614 cri.go:89] found id: "75511f019181b3813cc7d57031fb5c7b720c0760d787d3dc4e3bb9eab9e447b7"
	I1123 09:00:14.034808  291614 cri.go:89] found id: "de8e74b6f79cb01986f0143aa790500273203248c49b24f1e7569ebf6d7eea3b"
	I1123 09:00:14.034812  291614 cri.go:89] found id: "575e9ea051577a331acd367172e11954e99ac78da0892f1ce1556f6e7afc8bd1"
	I1123 09:00:14.034820  291614 cri.go:89] found id: "2b31531176241977a037c34aeb21cc0ee805446cd4582dd8c05f0bba5e5ee203"
	I1123 09:00:14.034823  291614 cri.go:89] found id: "bbd54f91446202b5a64aa6ec4f3f89b8ecf6e43bdac535a131f6367c8cea942c"
	I1123 09:00:14.034827  291614 cri.go:89] found id: "3c3749cfa9b1ed9f5c7d758974e38093080a45ccbe67f9df133d2a234c4d7216"
	I1123 09:00:14.034829  291614 cri.go:89] found id: "f93636a2eb282d8c5338280be50dffa8bd5f5b5cfff2c23a4c28fe0c8c63af6d"
	I1123 09:00:14.034832  291614 cri.go:89] found id: "8f1edccdddb80a5ba7c8da2abcb736527f5b92c08683957cf3031ee2a7946816"
	I1123 09:00:14.034836  291614 cri.go:89] found id: "1559bd52645fb109e782448eda0f021d65b39a587d504ef500408e924dfe9107"
	I1123 09:00:14.034844  291614 cri.go:89] found id: "6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da"
	I1123 09:00:14.034848  291614 cri.go:89] found id: "de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74"
	I1123 09:00:14.034853  291614 cri.go:89] found id: "87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c"
	I1123 09:00:14.034856  291614 cri.go:89] found id: "529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8"
	I1123 09:00:14.034859  291614 cri.go:89] found id: "22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca"
	I1123 09:00:14.034864  291614 cri.go:89] found id: "d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26"
	I1123 09:00:14.034867  291614 cri.go:89] found id: "61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c"
	I1123 09:00:14.034870  291614 cri.go:89] found id: "126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90"
	I1123 09:00:14.034873  291614 cri.go:89] found id: ""
	I1123 09:00:14.034926  291614 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:00:14.051889  291614 out.go:203] 
	W1123 09:00:14.055805  291614 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:00:14.055843  291614 out.go:285] * 
	* 
	W1123 09:00:14.439100  291614 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:00:14.442913  291614 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-984173 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.68s)

TestAddons/parallel/Registry (14.65s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.038065ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-r7jl6" [30719118-851e-4542-a4f8-c89f68f6bd04] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003976995s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-xt9vl" [0e2ac94e-9a8b-4407-8901-cdf4a4fdfc8a] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003138842s
addons_test.go:392: (dbg) Run:  kubectl --context addons-984173 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-984173 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-984173 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.031742156s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 ip
2025/11/23 09:00:38 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-984173 addons disable registry --alsologtostderr -v=1: exit status 11 (285.08876ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 09:00:38.228117  292132 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:00:38.229086  292132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:38.229152  292132 out.go:374] Setting ErrFile to fd 2...
	I1123 09:00:38.229174  292132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:38.229618  292132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:00:38.230103  292132 mustload.go:66] Loading cluster: addons-984173
	I1123 09:00:38.230708  292132 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:38.230750  292132 addons.go:622] checking whether the cluster is paused
	I1123 09:00:38.230948  292132 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:38.230997  292132 host.go:66] Checking if "addons-984173" exists ...
	I1123 09:00:38.232461  292132 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 09:00:38.260311  292132 ssh_runner.go:195] Run: systemctl --version
	I1123 09:00:38.260386  292132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 09:00:38.279331  292132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 09:00:38.384482  292132 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:00:38.384578  292132 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:00:38.418778  292132 cri.go:89] found id: "742ade421fb244b66d8fcfec87fa144fdc7f8738e38cca57ac6ac0bb8fbceba5"
	I1123 09:00:38.418800  292132 cri.go:89] found id: "f6783f9da95524615f3aa651e3af1196eb24de610f8b5966c9f13c754788eeea"
	I1123 09:00:38.418806  292132 cri.go:89] found id: "37d6af059fa8d9a5c10fe2947c3c9208c14a28bda6e706d53ace9352a57d3538"
	I1123 09:00:38.418810  292132 cri.go:89] found id: "66657c8a6cec57d0f3f4516fbacce8c43b7cd7b560ee7e99d4320d4d8ecee0db"
	I1123 09:00:38.418813  292132 cri.go:89] found id: "8586599f3919f69a8d7f1a7d090d598631c698412878d914f2b728fa92c78020"
	I1123 09:00:38.418817  292132 cri.go:89] found id: "6f902ae88d97ebbadeb5af33479296f1cb746c0980deddddd1b09ef5f3bc8365"
	I1123 09:00:38.418821  292132 cri.go:89] found id: "75511f019181b3813cc7d57031fb5c7b720c0760d787d3dc4e3bb9eab9e447b7"
	I1123 09:00:38.418824  292132 cri.go:89] found id: "de8e74b6f79cb01986f0143aa790500273203248c49b24f1e7569ebf6d7eea3b"
	I1123 09:00:38.418827  292132 cri.go:89] found id: "575e9ea051577a331acd367172e11954e99ac78da0892f1ce1556f6e7afc8bd1"
	I1123 09:00:38.418840  292132 cri.go:89] found id: "2b31531176241977a037c34aeb21cc0ee805446cd4582dd8c05f0bba5e5ee203"
	I1123 09:00:38.418844  292132 cri.go:89] found id: "bbd54f91446202b5a64aa6ec4f3f89b8ecf6e43bdac535a131f6367c8cea942c"
	I1123 09:00:38.418847  292132 cri.go:89] found id: "3c3749cfa9b1ed9f5c7d758974e38093080a45ccbe67f9df133d2a234c4d7216"
	I1123 09:00:38.418850  292132 cri.go:89] found id: "f93636a2eb282d8c5338280be50dffa8bd5f5b5cfff2c23a4c28fe0c8c63af6d"
	I1123 09:00:38.418853  292132 cri.go:89] found id: "8f1edccdddb80a5ba7c8da2abcb736527f5b92c08683957cf3031ee2a7946816"
	I1123 09:00:38.418856  292132 cri.go:89] found id: "1559bd52645fb109e782448eda0f021d65b39a587d504ef500408e924dfe9107"
	I1123 09:00:38.418861  292132 cri.go:89] found id: "6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da"
	I1123 09:00:38.418865  292132 cri.go:89] found id: "de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74"
	I1123 09:00:38.418869  292132 cri.go:89] found id: "87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c"
	I1123 09:00:38.418873  292132 cri.go:89] found id: "529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8"
	I1123 09:00:38.418876  292132 cri.go:89] found id: "22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca"
	I1123 09:00:38.418881  292132 cri.go:89] found id: "d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26"
	I1123 09:00:38.418891  292132 cri.go:89] found id: "61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c"
	I1123 09:00:38.418895  292132 cri.go:89] found id: "126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90"
	I1123 09:00:38.418898  292132 cri.go:89] found id: ""
	I1123 09:00:38.418953  292132 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:00:38.434682  292132 out.go:203] 
	W1123 09:00:38.437670  292132 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:00:38.437696  292132 out.go:285] * 
	* 
	W1123 09:00:38.444054  292132 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:00:38.446930  292132 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-984173 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.65s)

TestAddons/parallel/RegistryCreds (0.47s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.560413ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-984173
addons_test.go:332: (dbg) Run:  kubectl --context addons-984173 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-984173 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (253.669138ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 09:01:17.593563  294044 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:01:17.594543  294044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:01:17.594590  294044 out.go:374] Setting ErrFile to fd 2...
	I1123 09:01:17.594623  294044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:01:17.594993  294044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:01:17.595400  294044 mustload.go:66] Loading cluster: addons-984173
	I1123 09:01:17.596123  294044 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:01:17.596168  294044 addons.go:622] checking whether the cluster is paused
	I1123 09:01:17.596330  294044 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:01:17.596366  294044 host.go:66] Checking if "addons-984173" exists ...
	I1123 09:01:17.597080  294044 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 09:01:17.616847  294044 ssh_runner.go:195] Run: systemctl --version
	I1123 09:01:17.616902  294044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 09:01:17.635794  294044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 09:01:17.739861  294044 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:01:17.739949  294044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:01:17.773008  294044 cri.go:89] found id: "742ade421fb244b66d8fcfec87fa144fdc7f8738e38cca57ac6ac0bb8fbceba5"
	I1123 09:01:17.773031  294044 cri.go:89] found id: "f6783f9da95524615f3aa651e3af1196eb24de610f8b5966c9f13c754788eeea"
	I1123 09:01:17.773037  294044 cri.go:89] found id: "37d6af059fa8d9a5c10fe2947c3c9208c14a28bda6e706d53ace9352a57d3538"
	I1123 09:01:17.773041  294044 cri.go:89] found id: "66657c8a6cec57d0f3f4516fbacce8c43b7cd7b560ee7e99d4320d4d8ecee0db"
	I1123 09:01:17.773044  294044 cri.go:89] found id: "8586599f3919f69a8d7f1a7d090d598631c698412878d914f2b728fa92c78020"
	I1123 09:01:17.773048  294044 cri.go:89] found id: "6f902ae88d97ebbadeb5af33479296f1cb746c0980deddddd1b09ef5f3bc8365"
	I1123 09:01:17.773051  294044 cri.go:89] found id: "75511f019181b3813cc7d57031fb5c7b720c0760d787d3dc4e3bb9eab9e447b7"
	I1123 09:01:17.773055  294044 cri.go:89] found id: "de8e74b6f79cb01986f0143aa790500273203248c49b24f1e7569ebf6d7eea3b"
	I1123 09:01:17.773058  294044 cri.go:89] found id: "575e9ea051577a331acd367172e11954e99ac78da0892f1ce1556f6e7afc8bd1"
	I1123 09:01:17.773067  294044 cri.go:89] found id: "2b31531176241977a037c34aeb21cc0ee805446cd4582dd8c05f0bba5e5ee203"
	I1123 09:01:17.773070  294044 cri.go:89] found id: "bbd54f91446202b5a64aa6ec4f3f89b8ecf6e43bdac535a131f6367c8cea942c"
	I1123 09:01:17.773073  294044 cri.go:89] found id: "3c3749cfa9b1ed9f5c7d758974e38093080a45ccbe67f9df133d2a234c4d7216"
	I1123 09:01:17.773077  294044 cri.go:89] found id: "f93636a2eb282d8c5338280be50dffa8bd5f5b5cfff2c23a4c28fe0c8c63af6d"
	I1123 09:01:17.773080  294044 cri.go:89] found id: "8f1edccdddb80a5ba7c8da2abcb736527f5b92c08683957cf3031ee2a7946816"
	I1123 09:01:17.773083  294044 cri.go:89] found id: "1559bd52645fb109e782448eda0f021d65b39a587d504ef500408e924dfe9107"
	I1123 09:01:17.773091  294044 cri.go:89] found id: "6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da"
	I1123 09:01:17.773097  294044 cri.go:89] found id: "de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74"
	I1123 09:01:17.773102  294044 cri.go:89] found id: "87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c"
	I1123 09:01:17.773105  294044 cri.go:89] found id: "529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8"
	I1123 09:01:17.773108  294044 cri.go:89] found id: "22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca"
	I1123 09:01:17.773118  294044 cri.go:89] found id: "d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26"
	I1123 09:01:17.773124  294044 cri.go:89] found id: "61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c"
	I1123 09:01:17.773127  294044 cri.go:89] found id: "126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90"
	I1123 09:01:17.773131  294044 cri.go:89] found id: ""
	I1123 09:01:17.773180  294044 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:01:17.788548  294044 out.go:203] 
	W1123 09:01:17.791669  294044 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:01:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:01:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:01:17.791700  294044 out.go:285] * 
	* 
	W1123 09:01:17.798177  294044 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:01:17.801191  294044 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-984173 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.47s)

TestAddons/parallel/Ingress (144.08s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-984173 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-984173 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-984173 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [9f1b245a-5a08-47e4-8e0b-75267cdee6de] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [9f1b245a-5a08-47e4-8e0b-75267cdee6de] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.005272138s
I1123 09:01:08.142414  284904 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-984173 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.169993356s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
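
The in-node curl above timed out (ssh exited with status 28 after roughly 2m11s). A hedged sketch for re-checking the route by hand, reusing the exact command and Host header from the test; the kubectl queries are added for context and assume the ingress-nginx namespace from the wait step above and the default namespace used by the test manifests:

	# re-run the request that timed out, with verbose output
	out/minikube-linux-arm64 -p addons-984173 ssh "curl -sv http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# confirm the ingress controller and the backing nginx pod/service/ingress are ready
	kubectl --context addons-984173 -n ingress-nginx get pods -o wide
	kubectl --context addons-984173 -n default get pods,svc,ingress
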
addons_test.go:288: (dbg) Run:  kubectl --context addons-984173 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-984173
helpers_test.go:243: (dbg) docker inspect addons-984173:

-- stdout --
	[
	    {
	        "Id": "733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c",
	        "Created": "2025-11-23T08:57:57.310659194Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286067,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:57:57.383407496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c/hostname",
	        "HostsPath": "/var/lib/docker/containers/733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c/hosts",
	        "LogPath": "/var/lib/docker/containers/733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c/733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c-json.log",
	        "Name": "/addons-984173",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-984173:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-984173",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c",
	                "LowerDir": "/var/lib/docker/overlay2/c4e7a78fff0aea01be9146e8d2d65b224cce7cc0559b669da545caca15ec8f4b-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c4e7a78fff0aea01be9146e8d2d65b224cce7cc0559b669da545caca15ec8f4b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c4e7a78fff0aea01be9146e8d2d65b224cce7cc0559b669da545caca15ec8f4b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c4e7a78fff0aea01be9146e8d2d65b224cce7cc0559b669da545caca15ec8f4b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-984173",
	                "Source": "/var/lib/docker/volumes/addons-984173/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-984173",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-984173",
	                "name.minikube.sigs.k8s.io": "addons-984173",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e96523847ea90da0f314badfbe09f857cd905e9b110f4ad5c2cc3e84f3a93afa",
	            "SandboxKey": "/var/run/docker/netns/e96523847ea9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-984173": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:ea:74:47:83:9c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c5904d4a6f6f7099b2f69579ce698bfce4b7f9c7f43969a8d6c2e1da088445cb",
	                    "EndpointID": "44478c4b734361e96f3844242dd897b9df5f9c033de95bdb6d9525ca1c7409ea",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-984173",
	                        "733ef088474c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-984173 -n addons-984173
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-984173 logs -n 25: (1.407619683s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-864519                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-864519 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ start   │ --download-only -p binary-mirror-135438 --alsologtostderr --binary-mirror http://127.0.0.1:42341 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-135438   │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ delete  │ -p binary-mirror-135438                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-135438   │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable dashboard -p addons-984173                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ addons  │ disable dashboard -p addons-984173                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ start   │ -p addons-984173 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 09:00 UTC │
	│ addons  │ addons-984173 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ addons  │ addons-984173 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ addons  │ addons-984173 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ addons  │ addons-984173 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ ip      │ addons-984173 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ addons  │ addons-984173 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ addons  │ addons-984173 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ addons  │ enable headlamp -p addons-984173 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ ssh     │ addons-984173 ssh cat /opt/local-path-provisioner/pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ addons  │ addons-984173 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ addons  │ addons-984173 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ addons  │ addons-984173 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ addons  │ addons-984173 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ ssh     │ addons-984173 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │                     │
	│ addons  │ addons-984173 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │                     │
	│ addons  │ addons-984173 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-984173                                                                                                                                                                                                                                                                                                                                                                                           │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ addons  │ addons-984173 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │                     │
	│ ip      │ addons-984173 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:03 UTC │ 23 Nov 25 09:03 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:57:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:57:33.078797  285663 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:57:33.078919  285663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:57:33.078931  285663 out.go:374] Setting ErrFile to fd 2...
	I1123 08:57:33.078942  285663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:57:33.079589  285663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 08:57:33.080160  285663 out.go:368] Setting JSON to false
	I1123 08:57:33.081017  285663 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6002,"bootTime":1763882251,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 08:57:33.081112  285663 start.go:143] virtualization:  
	I1123 08:57:33.084407  285663 out.go:179] * [addons-984173] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:57:33.088182  285663 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:57:33.088263  285663 notify.go:221] Checking for updates...
	I1123 08:57:33.093915  285663 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:57:33.096903  285663 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 08:57:33.099733  285663 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 08:57:33.102573  285663 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:57:33.105358  285663 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:57:33.108485  285663 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:57:33.133953  285663 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:57:33.134088  285663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:57:33.194605  285663 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-23 08:57:33.186080456 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:57:33.194704  285663 docker.go:319] overlay module found
	I1123 08:57:33.197798  285663 out.go:179] * Using the docker driver based on user configuration
	I1123 08:57:33.200566  285663 start.go:309] selected driver: docker
	I1123 08:57:33.200587  285663 start.go:927] validating driver "docker" against <nil>
	I1123 08:57:33.200602  285663 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:57:33.201305  285663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:57:33.253342  285663 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-23 08:57:33.244585815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:57:33.253522  285663 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:57:33.253760  285663 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:57:33.256696  285663 out.go:179] * Using Docker driver with root privileges
	I1123 08:57:33.259533  285663 cni.go:84] Creating CNI manager for ""
	I1123 08:57:33.259602  285663 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:57:33.259616  285663 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:57:33.259693  285663 start.go:353] cluster config:
	{Name:addons-984173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-984173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:57:33.262749  285663 out.go:179] * Starting "addons-984173" primary control-plane node in "addons-984173" cluster
	I1123 08:57:33.265444  285663 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:57:33.268252  285663 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:57:33.271034  285663 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:57:33.271079  285663 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 08:57:33.271094  285663 cache.go:65] Caching tarball of preloaded images
	I1123 08:57:33.271108  285663 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:57:33.271184  285663 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 08:57:33.271195  285663 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:57:33.271561  285663 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/config.json ...
	I1123 08:57:33.271594  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/config.json: {Name:mk7616ad40d907a35dda8e69123013a3c465e5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:33.286349  285663 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 08:57:33.286492  285663 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 08:57:33.286530  285663 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 08:57:33.286539  285663 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 08:57:33.286546  285663 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 08:57:33.286551  285663 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1123 08:57:51.152029  285663 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1123 08:57:51.152071  285663 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:57:51.152116  285663 start.go:360] acquireMachinesLock for addons-984173: {Name:mkae3618c5c75bc99801f8654bd1771081e55a95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:57:51.152866  285663 start.go:364] duration metric: took 722.395µs to acquireMachinesLock for "addons-984173"
	I1123 08:57:51.152908  285663 start.go:93] Provisioning new machine with config: &{Name:addons-984173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-984173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:57:51.152989  285663 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:57:51.156395  285663 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1123 08:57:51.156685  285663 start.go:159] libmachine.API.Create for "addons-984173" (driver="docker")
	I1123 08:57:51.156734  285663 client.go:173] LocalClient.Create starting
	I1123 08:57:51.156869  285663 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem
	I1123 08:57:51.477283  285663 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem
	I1123 08:57:51.690063  285663 cli_runner.go:164] Run: docker network inspect addons-984173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:57:51.705839  285663 cli_runner.go:211] docker network inspect addons-984173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:57:51.705918  285663 network_create.go:284] running [docker network inspect addons-984173] to gather additional debugging logs...
	I1123 08:57:51.705937  285663 cli_runner.go:164] Run: docker network inspect addons-984173
	W1123 08:57:51.721336  285663 cli_runner.go:211] docker network inspect addons-984173 returned with exit code 1
	I1123 08:57:51.721382  285663 network_create.go:287] error running [docker network inspect addons-984173]: docker network inspect addons-984173: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-984173 not found
	I1123 08:57:51.721395  285663 network_create.go:289] output of [docker network inspect addons-984173]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-984173 not found
	
	** /stderr **
	I1123 08:57:51.721529  285663 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:57:51.737377  285663 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b0e790}
	I1123 08:57:51.737488  285663 network_create.go:124] attempt to create docker network addons-984173 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1123 08:57:51.737544  285663 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-984173 addons-984173
	I1123 08:57:51.808983  285663 network_create.go:108] docker network addons-984173 192.168.49.0/24 created
	I1123 08:57:51.809016  285663 kic.go:121] calculated static IP "192.168.49.2" for the "addons-984173" container
	I1123 08:57:51.809103  285663 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:57:51.828920  285663 cli_runner.go:164] Run: docker volume create addons-984173 --label name.minikube.sigs.k8s.io=addons-984173 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:57:51.846845  285663 oci.go:103] Successfully created a docker volume addons-984173
	I1123 08:57:51.846938  285663 cli_runner.go:164] Run: docker run --rm --name addons-984173-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-984173 --entrypoint /usr/bin/test -v addons-984173:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:57:52.817450  285663 oci.go:107] Successfully prepared a docker volume addons-984173
	I1123 08:57:52.817516  285663 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:57:52.817533  285663 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:57:52.817593  285663 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-984173:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:57:57.233709  285663 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-984173:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.416059913s)
	I1123 08:57:57.233744  285663 kic.go:203] duration metric: took 4.416208969s to extract preloaded images to volume ...
	W1123 08:57:57.233877  285663 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:57:57.233986  285663 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:57:57.296085  285663 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-984173 --name addons-984173 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-984173 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-984173 --network addons-984173 --ip 192.168.49.2 --volume addons-984173:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:57:57.598227  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Running}}
	I1123 08:57:57.617756  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:57:57.635904  285663 cli_runner.go:164] Run: docker exec addons-984173 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:57:57.684311  285663 oci.go:144] the created container "addons-984173" has a running status.
	I1123 08:57:57.684338  285663 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa...
	I1123 08:57:57.833286  285663 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:57:57.854677  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:57:57.877809  285663 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:57:57.877828  285663 kic_runner.go:114] Args: [docker exec --privileged addons-984173 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:57:57.948042  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:57:57.971140  285663 machine.go:94] provisionDockerMachine start ...
	I1123 08:57:57.971238  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:57:58.000953  285663 main.go:143] libmachine: Using SSH client type: native
	I1123 08:57:58.001315  285663 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I1123 08:57:58.001333  285663 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:57:58.002321  285663 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:58:01.153621  285663 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-984173
	
	I1123 08:58:01.153650  285663 ubuntu.go:182] provisioning hostname "addons-984173"
	I1123 08:58:01.153730  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:01.173486  285663 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:01.173810  285663 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I1123 08:58:01.173821  285663 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-984173 && echo "addons-984173" | sudo tee /etc/hostname
	I1123 08:58:01.334933  285663 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-984173
	
	I1123 08:58:01.335028  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:01.351920  285663 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:01.352257  285663 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I1123 08:58:01.352280  285663 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-984173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-984173/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-984173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:58:01.505572  285663 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:58:01.505597  285663 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 08:58:01.505623  285663 ubuntu.go:190] setting up certificates
	I1123 08:58:01.505633  285663 provision.go:84] configureAuth start
	I1123 08:58:01.505695  285663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-984173
	I1123 08:58:01.522222  285663 provision.go:143] copyHostCerts
	I1123 08:58:01.522309  285663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 08:58:01.522441  285663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 08:58:01.522505  285663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 08:58:01.522565  285663 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.addons-984173 san=[127.0.0.1 192.168.49.2 addons-984173 localhost minikube]
	I1123 08:58:01.678063  285663 provision.go:177] copyRemoteCerts
	I1123 08:58:01.678137  285663 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:58:01.678185  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:01.703822  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:01.813455  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:58:01.831085  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:58:01.848085  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 08:58:01.865485  285663 provision.go:87] duration metric: took 359.826729ms to configureAuth
	I1123 08:58:01.865559  285663 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:58:01.865799  285663 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:01.865933  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:01.882699  285663 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:01.883020  285663 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I1123 08:58:01.883044  285663 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:58:02.177564  285663 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:58:02.177584  285663 machine.go:97] duration metric: took 4.206426115s to provisionDockerMachine
	I1123 08:58:02.177595  285663 client.go:176] duration metric: took 11.020851334s to LocalClient.Create
	I1123 08:58:02.177607  285663 start.go:167] duration metric: took 11.020926329s to libmachine.API.Create "addons-984173"
	I1123 08:58:02.177615  285663 start.go:293] postStartSetup for "addons-984173" (driver="docker")
	I1123 08:58:02.177625  285663 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:58:02.177706  285663 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:58:02.177757  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:02.194870  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:02.301273  285663 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:58:02.304568  285663 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:58:02.304604  285663 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:58:02.304616  285663 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 08:58:02.304684  285663 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 08:58:02.304711  285663 start.go:296] duration metric: took 127.089168ms for postStartSetup
	I1123 08:58:02.305028  285663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-984173
	I1123 08:58:02.321849  285663 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/config.json ...
	I1123 08:58:02.322157  285663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:58:02.322207  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:02.338705  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:02.438210  285663 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:58:02.442777  285663 start.go:128] duration metric: took 11.289772451s to createHost
	I1123 08:58:02.442804  285663 start.go:83] releasing machines lock for "addons-984173", held for 11.289918076s
	I1123 08:58:02.442872  285663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-984173
	I1123 08:58:02.459980  285663 ssh_runner.go:195] Run: cat /version.json
	I1123 08:58:02.460000  285663 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:58:02.460028  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:02.460063  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:02.482905  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:02.502653  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:02.680113  285663 ssh_runner.go:195] Run: systemctl --version
	I1123 08:58:02.686449  285663 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:58:02.721441  285663 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:58:02.725741  285663 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:58:02.725811  285663 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:58:02.753761  285663 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:58:02.753787  285663 start.go:496] detecting cgroup driver to use...
	I1123 08:58:02.753820  285663 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:58:02.753871  285663 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:58:02.771717  285663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:58:02.784302  285663 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:58:02.784365  285663 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:58:02.802129  285663 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:58:02.820708  285663 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:58:02.942746  285663 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:58:03.074227  285663 docker.go:234] disabling docker service ...
	I1123 08:58:03.074295  285663 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:58:03.095266  285663 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:58:03.108522  285663 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:58:03.233781  285663 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:58:03.356432  285663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:58:03.370046  285663 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:58:03.383367  285663 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:58:03.383468  285663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:03.391837  285663 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:58:03.391927  285663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:03.400908  285663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:03.409441  285663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:03.418212  285663 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:58:03.426437  285663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:03.434935  285663 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:03.447797  285663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:03.456620  285663 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:58:03.463992  285663 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:58:03.471293  285663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:58:03.584135  285663 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:58:03.773705  285663 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:58:03.773806  285663 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:58:03.777765  285663 start.go:564] Will wait 60s for crictl version
	I1123 08:58:03.777831  285663 ssh_runner.go:195] Run: which crictl
	I1123 08:58:03.781331  285663 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:58:03.807062  285663 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:58:03.807243  285663 ssh_runner.go:195] Run: crio --version
	I1123 08:58:03.834377  285663 ssh_runner.go:195] Run: crio --version
	I1123 08:58:03.862971  285663 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 08:58:03.865789  285663 cli_runner.go:164] Run: docker network inspect addons-984173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:58:03.881968  285663 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 08:58:03.885818  285663 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:58:03.895609  285663 kubeadm.go:884] updating cluster {Name:addons-984173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-984173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:58:03.895724  285663 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:58:03.895785  285663 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:58:03.928533  285663 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:58:03.928555  285663 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:58:03.928613  285663 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:58:03.954231  285663 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:58:03.954254  285663 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:58:03.954262  285663 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1123 08:58:03.954364  285663 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-984173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-984173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:58:03.954443  285663 ssh_runner.go:195] Run: crio config
	I1123 08:58:04.026096  285663 cni.go:84] Creating CNI manager for ""
	I1123 08:58:04.026142  285663 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:58:04.026167  285663 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:58:04.026192  285663 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-984173 NodeName:addons-984173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:58:04.026320  285663 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-984173"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:58:04.026393  285663 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:58:04.034005  285663 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:58:04.034075  285663 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:58:04.041525  285663 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 08:58:04.053814  285663 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:58:04.066362  285663 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1123 08:58:04.078686  285663 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:58:04.082192  285663 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:58:04.091435  285663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:58:04.205748  285663 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:58:04.222941  285663 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173 for IP: 192.168.49.2
	I1123 08:58:04.222963  285663 certs.go:195] generating shared ca certs ...
	I1123 08:58:04.222978  285663 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.223173  285663 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 08:58:04.326775  285663 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt ...
	I1123 08:58:04.326809  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt: {Name:mk7b2cb380eb2c6d9b4c557b53e038640e948f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.327661  285663 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key ...
	I1123 08:58:04.327678  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key: {Name:mka195bd406baa7297b08ee2229e68eb23e70ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.327767  285663 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 08:58:04.400853  285663 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt ...
	I1123 08:58:04.400881  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt: {Name:mka3614bd2fc07777b02ee7c7a59e444e85c8007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.401044  285663 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key ...
	I1123 08:58:04.401056  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key: {Name:mk1d754cf0fedaac87b2d7052e74b68fdf7d3925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.401137  285663 certs.go:257] generating profile certs ...
	I1123 08:58:04.401197  285663 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.key
	I1123 08:58:04.401213  285663 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt with IP's: []
	I1123 08:58:04.515463  285663 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt ...
	I1123 08:58:04.515503  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: {Name:mk0cb577cba32d0ba0e8ed99eb58ab8036539ebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.515733  285663 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.key ...
	I1123 08:58:04.515752  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.key: {Name:mk4ee1cf36ada24c2eccc2269ed0c5e100c87767 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.516504  285663 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.key.05f7d282
	I1123 08:58:04.516529  285663 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.crt.05f7d282 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1123 08:58:04.663924  285663 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.crt.05f7d282 ...
	I1123 08:58:04.663956  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.crt.05f7d282: {Name:mk29ec3c06ffa94819688b6c04a4da23123ccd54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.664139  285663 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.key.05f7d282 ...
	I1123 08:58:04.664153  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.key.05f7d282: {Name:mkc23d3e4fba0a94d7fbb37262ce3a6a61cad94b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.664249  285663 certs.go:382] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.crt.05f7d282 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.crt
	I1123 08:58:04.664326  285663 certs.go:386] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.key.05f7d282 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.key
	I1123 08:58:04.664396  285663 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/proxy-client.key
	I1123 08:58:04.664415  285663 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/proxy-client.crt with IP's: []
	I1123 08:58:04.852251  285663 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/proxy-client.crt ...
	I1123 08:58:04.852281  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/proxy-client.crt: {Name:mk731cf009e1fac35f29d2f20663f6f28ce6a2db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.852458  285663 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/proxy-client.key ...
	I1123 08:58:04.852472  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/proxy-client.key: {Name:mkf308a93be8ab758fe161e4dfbaa4620498ab19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.853227  285663 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:58:04.853276  285663 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:58:04.853310  285663 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:58:04.853342  285663 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
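	[editor's note] At this point the shared CAs (minikubeCA, proxyClientCA) and the per-profile client/apiserver/aggregator certs have been written under the local .minikube directory. A quick sanity check with plain openssl on the host, using the paths from the log above (not something the test itself runs):

    # subject, issuer and validity of the freshly generated cluster CA
    openssl x509 -noout -subject -issuer -dates \
      -in /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt
    # the apiserver cert should carry SANs for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.crt \
      | grep -A1 'Subject Alternative Name'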
	I1123 08:58:04.853963  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:58:04.871661  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:58:04.890152  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:58:04.909743  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:58:04.928232  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 08:58:04.946069  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:58:04.964103  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:58:04.981854  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:58:05.002214  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:58:05.022167  285663 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:58:05.036026  285663 ssh_runner.go:195] Run: openssl version
	I1123 08:58:05.042440  285663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:58:05.051057  285663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:58:05.054975  285663 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:58:05.055043  285663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:58:05.097000  285663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:58:05.106198  285663 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:58:05.111018  285663 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:58:05.111123  285663 kubeadm.go:401] StartCluster: {Name:addons-984173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-984173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:58:05.111224  285663 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:58:05.111301  285663 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:58:05.140676  285663 cri.go:89] found id: ""
	I1123 08:58:05.140795  285663 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:58:05.150943  285663 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:58:05.159253  285663 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:58:05.159348  285663 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:58:05.167358  285663 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:58:05.167379  285663 kubeadm.go:158] found existing configuration files:
	
	I1123 08:58:05.167453  285663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:58:05.175361  285663 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:58:05.175425  285663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:58:05.182863  285663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:58:05.190546  285663 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:58:05.190613  285663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:58:05.198229  285663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:58:05.205647  285663 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:58:05.205712  285663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:58:05.212960  285663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:58:05.220411  285663 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:58:05.220488  285663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:58:05.228126  285663 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:58:05.292752  285663 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 08:58:05.293041  285663 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 08:58:05.362332  285663 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:58:23.733845  285663 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:58:23.733904  285663 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:58:23.734011  285663 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:58:23.734086  285663 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:58:23.734134  285663 kubeadm.go:319] OS: Linux
	I1123 08:58:23.734184  285663 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:58:23.734235  285663 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:58:23.734287  285663 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:58:23.734335  285663 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:58:23.734386  285663 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:58:23.734442  285663 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:58:23.734491  285663 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:58:23.734543  285663 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:58:23.734592  285663 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:58:23.734667  285663 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:58:23.734771  285663 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:58:23.734866  285663 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:58:23.734933  285663 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:58:23.737900  285663 out.go:252]   - Generating certificates and keys ...
	I1123 08:58:23.737987  285663 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:58:23.738060  285663 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:58:23.738135  285663 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:58:23.738196  285663 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:58:23.738260  285663 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:58:23.738313  285663 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:58:23.738371  285663 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:58:23.738495  285663 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-984173 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 08:58:23.738551  285663 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:58:23.738669  285663 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-984173 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 08:58:23.738737  285663 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:58:23.738803  285663 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:58:23.738850  285663 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:58:23.738909  285663 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:58:23.738963  285663 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:58:23.739028  285663 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:58:23.739087  285663 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:58:23.739152  285663 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:58:23.739210  285663 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:58:23.739295  285663 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:58:23.739364  285663 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:58:23.744095  285663 out.go:252]   - Booting up control plane ...
	I1123 08:58:23.744234  285663 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:58:23.744346  285663 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:58:23.744424  285663 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:58:23.744565  285663 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:58:23.744673  285663 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:58:23.744784  285663 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:58:23.744892  285663 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:58:23.744961  285663 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:58:23.745130  285663 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:58:23.745250  285663 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:58:23.745316  285663 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.003522866s
	I1123 08:58:23.745477  285663 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:58:23.745568  285663 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1123 08:58:23.745683  285663 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:58:23.745805  285663 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:58:23.745897  285663 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.869430516s
	I1123 08:58:23.745974  285663 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.538959716s
	I1123 08:58:23.746047  285663 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501807408s
	I1123 08:58:23.746216  285663 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:58:23.746382  285663 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:58:23.746445  285663 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:58:23.746667  285663 kubeadm.go:319] [mark-control-plane] Marking the node addons-984173 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:58:23.746734  285663 kubeadm.go:319] [bootstrap-token] Using token: tb3p7g.n9zph3ueg2zzg57t
	I1123 08:58:23.749849  285663 out.go:252]   - Configuring RBAC rules ...
	I1123 08:58:23.749987  285663 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:58:23.750085  285663 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:58:23.750272  285663 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:58:23.750426  285663 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:58:23.750552  285663 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:58:23.750639  285663 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:58:23.750754  285663 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:58:23.750797  285663 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:58:23.750842  285663 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:58:23.750846  285663 kubeadm.go:319] 
	I1123 08:58:23.750905  285663 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:58:23.750909  285663 kubeadm.go:319] 
	I1123 08:58:23.750992  285663 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:58:23.750997  285663 kubeadm.go:319] 
	I1123 08:58:23.751022  285663 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:58:23.751080  285663 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:58:23.751131  285663 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:58:23.751134  285663 kubeadm.go:319] 
	I1123 08:58:23.751188  285663 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:58:23.751192  285663 kubeadm.go:319] 
	I1123 08:58:23.751239  285663 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:58:23.751242  285663 kubeadm.go:319] 
	I1123 08:58:23.751294  285663 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:58:23.751369  285663 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:58:23.751437  285663 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:58:23.751440  285663 kubeadm.go:319] 
	I1123 08:58:23.751524  285663 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:58:23.751601  285663 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:58:23.751604  285663 kubeadm.go:319] 
	I1123 08:58:23.751690  285663 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tb3p7g.n9zph3ueg2zzg57t \
	I1123 08:58:23.751793  285663 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:887f8119ffe4d5a917d34cb24e0eb6ba3996e6bcce8cd575315ae96526a54a7e \
	I1123 08:58:23.751813  285663 kubeadm.go:319] 	--control-plane 
	I1123 08:58:23.751817  285663 kubeadm.go:319] 
	I1123 08:58:23.751901  285663 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:58:23.751904  285663 kubeadm.go:319] 
	I1123 08:58:23.751986  285663 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tb3p7g.n9zph3ueg2zzg57t \
	I1123 08:58:23.752102  285663 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:887f8119ffe4d5a917d34cb24e0eb6ba3996e6bcce8cd575315ae96526a54a7e 
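	[editor's note] The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed from the cluster CA on the node; this is the standard kubeadm recipe, shown here only as a cross-check and using the certificate directory this cluster actually uses (/var/lib/minikube/certs, per the [certs] line above):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'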
	I1123 08:58:23.752112  285663 cni.go:84] Creating CNI manager for ""
	I1123 08:58:23.752119  285663 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:58:23.755219  285663 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:58:23.758178  285663 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:58:23.769205  285663 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:58:23.769225  285663 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:58:23.781887  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:58:24.073519  285663 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:58:24.073643  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:24.073734  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-984173 minikube.k8s.io/updated_at=2025_11_23T08_58_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=addons-984173 minikube.k8s.io/primary=true
	I1123 08:58:24.297852  285663 ops.go:34] apiserver oom_adj: -16
	I1123 08:58:24.297964  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:24.798115  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:25.298663  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:25.798673  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:26.298095  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:26.798734  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:27.298615  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:27.798096  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:28.298376  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:28.414783  285663 kubeadm.go:1114] duration metric: took 4.34117504s to wait for elevateKubeSystemPrivileges
	I1123 08:58:28.414811  285663 kubeadm.go:403] duration metric: took 23.303700743s to StartCluster
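	[editor's note] The repeated `kubectl get sa default` calls above are minikube polling until the control plane has created the default ServiceAccount; the same step ("elevateKubeSystemPrivileges") also created the minikube-rbac ClusterRoleBinding seen a few lines earlier. A rough standalone equivalent of that wait, assuming kubectl is already pointed at the new cluster:

    # poll until the default ServiceAccount exists in the default namespace
    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done
    # the RBAC binding created by minikube for kube-system:default
    kubectl get clusterrolebinding minikube-rbac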
	I1123 08:58:28.414828  285663 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:28.415597  285663 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 08:58:28.415991  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:28.416189  285663 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:58:28.416357  285663 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:58:28.416619  285663 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:28.416655  285663 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1123 08:58:28.416741  285663 addons.go:70] Setting yakd=true in profile "addons-984173"
	I1123 08:58:28.416755  285663 addons.go:239] Setting addon yakd=true in "addons-984173"
	I1123 08:58:28.416776  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.417265  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.417889  285663 addons.go:70] Setting inspektor-gadget=true in profile "addons-984173"
	I1123 08:58:28.417917  285663 addons.go:239] Setting addon inspektor-gadget=true in "addons-984173"
	I1123 08:58:28.417942  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.418146  285663 addons.go:70] Setting metrics-server=true in profile "addons-984173"
	I1123 08:58:28.418163  285663 addons.go:239] Setting addon metrics-server=true in "addons-984173"
	I1123 08:58:28.418184  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.418385  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.418600  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.422285  285663 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-984173"
	I1123 08:58:28.422320  285663 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-984173"
	I1123 08:58:28.422352  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.422811  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.423260  285663 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-984173"
	I1123 08:58:28.424590  285663 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-984173"
	I1123 08:58:28.424750  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.426833  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.423417  285663 addons.go:70] Setting cloud-spanner=true in profile "addons-984173"
	I1123 08:58:28.430054  285663 addons.go:239] Setting addon cloud-spanner=true in "addons-984173"
	I1123 08:58:28.430104  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.430538  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.423428  285663 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-984173"
	I1123 08:58:28.446564  285663 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-984173"
	I1123 08:58:28.446599  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.423438  285663 addons.go:70] Setting default-storageclass=true in profile "addons-984173"
	I1123 08:58:28.446723  285663 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-984173"
	I1123 08:58:28.446998  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.454558  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.423444  285663 addons.go:70] Setting gcp-auth=true in profile "addons-984173"
	I1123 08:58:28.465315  285663 mustload.go:66] Loading cluster: addons-984173
	I1123 08:58:28.465596  285663 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:28.465866  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.423451  285663 addons.go:70] Setting ingress=true in profile "addons-984173"
	I1123 08:58:28.490838  285663 addons.go:239] Setting addon ingress=true in "addons-984173"
	I1123 08:58:28.490886  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.491351  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.423457  285663 addons.go:70] Setting ingress-dns=true in profile "addons-984173"
	I1123 08:58:28.502236  285663 addons.go:239] Setting addon ingress-dns=true in "addons-984173"
	I1123 08:58:28.502293  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.502762  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.423502  285663 out.go:179] * Verifying Kubernetes components...
	I1123 08:58:28.424461  285663 addons.go:70] Setting volcano=true in profile "addons-984173"
	I1123 08:58:28.571914  285663 addons.go:239] Setting addon volcano=true in "addons-984173"
	I1123 08:58:28.571966  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.572447  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.579454  285663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:58:28.424472  285663 addons.go:70] Setting registry=true in profile "addons-984173"
	I1123 08:58:28.589296  285663 addons.go:239] Setting addon registry=true in "addons-984173"
	I1123 08:58:28.589340  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.424497  285663 addons.go:70] Setting registry-creds=true in profile "addons-984173"
	I1123 08:58:28.589611  285663 addons.go:239] Setting addon registry-creds=true in "addons-984173"
	I1123 08:58:28.589634  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.590084  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.603230  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.424506  285663 addons.go:70] Setting storage-provisioner=true in profile "addons-984173"
	I1123 08:58:28.603627  285663 addons.go:239] Setting addon storage-provisioner=true in "addons-984173"
	I1123 08:58:28.603662  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.604070  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.621895  285663 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1123 08:58:28.424511  285663 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-984173"
	I1123 08:58:28.624195  285663 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-984173"
	I1123 08:58:28.624524  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.627409  285663 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 08:58:28.627457  285663 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 08:58:28.627523  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.641830  285663 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1123 08:58:28.645129  285663 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1123 08:58:28.645153  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1123 08:58:28.645217  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.424531  285663 addons.go:70] Setting volumesnapshots=true in profile "addons-984173"
	I1123 08:58:28.649010  285663 addons.go:239] Setting addon volumesnapshots=true in "addons-984173"
	I1123 08:58:28.649049  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.649528  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.663147  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1123 08:58:28.663211  285663 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1123 08:58:28.666625  285663 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1123 08:58:28.666654  285663 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1123 08:58:28.666736  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.672327  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1123 08:58:28.675281  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1123 08:58:28.679870  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1123 08:58:28.684589  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1123 08:58:28.687620  285663 addons.go:239] Setting addon default-storageclass=true in "addons-984173"
	I1123 08:58:28.687664  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.688129  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.693453  285663 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1123 08:58:28.693581  285663 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1123 08:58:28.693640  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.736496  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1123 08:58:28.741689  285663 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1123 08:58:28.741710  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1123 08:58:28.741776  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.751936  285663 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 08:58:28.752016  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1123 08:58:28.752111  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.772508  285663 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1123 08:58:28.780611  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1123 08:58:28.717367  285663 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1123 08:58:28.816367  285663 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1123 08:58:28.823199  285663 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 08:58:28.823280  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1123 08:58:28.823395  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.830531  285663 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1123 08:58:28.830747  285663 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 08:58:28.830760  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1123 08:58:28.830904  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	W1123 08:58:28.839217  285663 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1123 08:58:28.855179  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1123 08:58:28.855402  285663 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 08:58:28.855529  285663 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 08:58:28.855543  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1123 08:58:28.855606  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.858375  285663 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:58:28.866863  285663 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1123 08:58:28.866960  285663 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1123 08:58:28.867065  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.889229  285663 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1123 08:58:28.892972  285663 out.go:179]   - Using image docker.io/registry:3.0.0
	I1123 08:58:28.894371  285663 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 08:58:28.899613  285663 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1123 08:58:28.899637  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1123 08:58:28.899721  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.903125  285663 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 08:58:28.903194  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1123 08:58:28.903280  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.938187  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:28.939232  285663 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-984173"
	I1123 08:58:28.939270  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.939705  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.947211  285663 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:58:28.947232  285663 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:58:28.947289  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.949907  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:28.953917  285663 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:28.956953  285663 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:58:28.956973  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:58:28.957034  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.978097  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1123 08:58:28.982220  285663 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1123 08:58:28.982249  285663 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1123 08:58:28.982321  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:29.034009  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.034866  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.035782  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.038695  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.051871  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.072797  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.085464  285663 out.go:179]   - Using image docker.io/busybox:stable
	I1123 08:58:29.089696  285663 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1123 08:58:29.092603  285663 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 08:58:29.092625  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1123 08:58:29.092694  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:29.096231  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.105894  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.121307  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.146036  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.166544  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.168119  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	W1123 08:58:29.178365  285663 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 08:58:29.178405  285663 retry.go:31] will retry after 172.408471ms: ssh: handshake failed: EOF
	I1123 08:58:29.183804  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.192462  285663 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1123 08:58:29.193934  285663 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 08:58:29.193957  285663 retry.go:31] will retry after 312.038601ms: ssh: handshake failed: EOF
	I1123 08:58:29.714677  285663 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1123 08:58:29.714778  285663 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1123 08:58:29.747151  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 08:58:29.767970  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 08:58:29.775979  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:58:29.780947  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 08:58:29.806715  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1123 08:58:29.812905  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1123 08:58:29.874213  285663 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 08:58:29.874325  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1123 08:58:29.924368  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 08:58:29.946772  285663 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1123 08:58:29.946875  285663 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1123 08:58:29.971357  285663 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1123 08:58:29.971423  285663 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1123 08:58:29.987202  285663 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1123 08:58:29.987295  285663 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1123 08:58:30.002470  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 08:58:30.013198  285663 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1123 08:58:30.013292  285663 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1123 08:58:30.037857  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 08:58:30.120895  285663 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 08:58:30.120980  285663 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 08:58:30.165123  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:58:30.201017  285663 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1123 08:58:30.201046  285663 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1123 08:58:30.203525  285663 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1123 08:58:30.203550  285663 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1123 08:58:30.206144  285663 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1123 08:58:30.206169  285663 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1123 08:58:30.236977  285663 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1123 08:58:30.237003  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1123 08:58:30.342026  285663 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:58:30.342113  285663 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 08:58:30.394444  285663 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1123 08:58:30.394518  285663 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1123 08:58:30.425931  285663 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1123 08:58:30.426004  285663 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1123 08:58:30.429342  285663 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1123 08:58:30.429440  285663 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1123 08:58:30.445521  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:58:30.454811  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1123 08:58:30.642967  285663 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1123 08:58:30.643046  285663 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1123 08:58:30.658733  285663 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 08:58:30.658803  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1123 08:58:30.688610  285663 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1123 08:58:30.688638  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1123 08:58:30.719483  285663 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.860999878s)
	I1123 08:58:30.719517  285663 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
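The pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the gateway address 192.168.49.1 inside the cluster. As an aside (not part of the harness), one way to confirm the injected hosts block landed is to dump the Corefile back out:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'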
	I1123 08:58:30.720471  285663 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.527982474s)
	I1123 08:58:30.721071  285663 node_ready.go:35] waiting up to 6m0s for node "addons-984173" to be "Ready" ...
	I1123 08:58:30.909053  285663 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1123 08:58:30.909136  285663 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1123 08:58:30.910445  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 08:58:30.957985  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1123 08:58:31.030873  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.283595348s)
	I1123 08:58:31.030979  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.262913564s)
	I1123 08:58:31.037982  285663 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1123 08:58:31.038065  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1123 08:58:31.229554  285663 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1123 08:58:31.229578  285663 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1123 08:58:31.236210  285663 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-984173" context rescaled to 1 replicas
	I1123 08:58:31.250799  285663 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1123 08:58:31.250824  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1123 08:58:31.266744  285663 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1123 08:58:31.266818  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1123 08:58:31.281985  285663 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 08:58:31.282069  285663 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1123 08:58:31.297140  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1123 08:58:32.758043  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:33.531245  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.750187558s)
	I1123 08:58:33.531362  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.755303257s)
	I1123 08:58:33.722415  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (3.915668046s)
	I1123 08:58:33.722477  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.909546905s)
	I1123 08:58:34.772399  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.769829607s)
	I1123 08:58:34.772464  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.734529056s)
	I1123 08:58:34.772659  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.607470817s)
	I1123 08:58:34.772774  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.327178983s)
	I1123 08:58:34.772785  285663 addons.go:495] Verifying addon metrics-server=true in "addons-984173"
	I1123 08:58:34.772813  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.317929906s)
	I1123 08:58:34.772822  285663 addons.go:495] Verifying addon registry=true in "addons-984173"
	I1123 08:58:34.773081  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.862572083s)
	W1123 08:58:34.773108  285663 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 08:58:34.773125  285663 retry.go:31] will retry after 169.668057ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
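The failure being retried here is the usual CRD-ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has not registered the kind yet. minikube simply retries (see the --force re-apply a few lines below); a minimal manual sketch of the same recovery, assuming the manifests are already on the node at the paths shown in this log, would be:

    # "kubectl" below stands for the same invocation used in this log:
    #   sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl
    # 1. Apply the CRDs on their own.
    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    # 2. Wait until the API server serves the new kinds.
    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    # 3. Only then apply the objects that depend on them.
    kubectl apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml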
	I1123 08:58:34.773167  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.815107856s)
	I1123 08:58:34.773361  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.848907182s)
	I1123 08:58:34.773397  285663 addons.go:495] Verifying addon ingress=true in "addons-984173"
	I1123 08:58:34.776854  285663 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-984173 service yakd-dashboard -n yakd-dashboard
	
	I1123 08:58:34.776967  285663 out.go:179] * Verifying registry addon...
	I1123 08:58:34.777013  285663 out.go:179] * Verifying ingress addon...
	I1123 08:58:34.781248  285663 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1123 08:58:34.782278  285663 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W1123 08:58:34.790756  285663 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
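The 'storage-provisioner-rancher' warning above is an optimistic-concurrency conflict: the storageclasses object changed between the addon's read and its update while it was marking local-path as the default class. The operation it was attempting amounts to setting the standard default-class annotation, roughly:

    kubectl patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'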
	I1123 08:58:34.791585  285663 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 08:58:34.791628  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:34.792141  285663 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1123 08:58:34.792182  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:34.943916  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 08:58:35.046997  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.749739942s)
	I1123 08:58:35.047080  285663 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-984173"
	I1123 08:58:35.050126  285663 out.go:179] * Verifying csi-hostpath-driver addon...
	I1123 08:58:35.053950  285663 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1123 08:58:35.062457  285663 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 08:58:35.062530  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
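The long run of 'waiting for pod ... Pending' lines that follows is minikube's poller (kapi.go) re-checking each addon's pods by label selector until they leave Pending. Outside the harness, a roughly equivalent one-shot check against the same selectors and namespaces would be:

    kubectl -n kube-system   wait --for=condition=Ready --timeout=6m pod -l kubernetes.io/minikube-addons=registry
    kubectl -n ingress-nginx wait --for=condition=Ready --timeout=6m pod -l app.kubernetes.io/name=ingress-nginx
    kubectl -n kube-system   wait --for=condition=Ready --timeout=6m pod -l kubernetes.io/minikube-addons=csi-hostpath-driver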
	W1123 08:58:35.224102  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:35.286433  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:35.286818  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:35.558974  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:35.785266  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:35.786009  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:36.057650  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:36.285399  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:36.286353  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:36.395351  285663 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1123 08:58:36.395452  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:36.412175  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:36.534846  285663 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1123 08:58:36.547757  285663 addons.go:239] Setting addon gcp-auth=true in "addons-984173"
	I1123 08:58:36.547857  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:36.548356  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:36.557514  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:36.568314  285663 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1123 08:58:36.568365  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:36.584754  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:36.785629  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:36.785674  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:37.058271  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:37.285088  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:37.285742  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:37.557703  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:37.642863  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.698902312s)
	I1123 08:58:37.643034  285663 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.074694919s)
	I1123 08:58:37.646322  285663 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 08:58:37.649210  285663 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1123 08:58:37.652037  285663 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1123 08:58:37.652061  285663 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1123 08:58:37.665915  285663 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1123 08:58:37.665981  285663 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1123 08:58:37.679369  285663 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 08:58:37.679397  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1123 08:58:37.692420  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W1123 08:58:37.724607  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:37.787454  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:37.787890  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:38.065161  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:38.196221  285663 addons.go:495] Verifying addon gcp-auth=true in "addons-984173"
	I1123 08:58:38.199309  285663 out.go:179] * Verifying gcp-auth addon...
	I1123 08:58:38.203025  285663 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1123 08:58:38.218065  285663 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1123 08:58:38.218153  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:38.285749  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:38.286096  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:38.556806  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:38.706104  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:38.785928  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:38.786270  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:39.057604  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:39.206401  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:39.284802  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:39.285806  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:39.556951  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:39.706633  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:39.786073  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:39.786690  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:40.057109  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:40.206991  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:40.224718  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:40.285878  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:40.286194  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:40.557126  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:40.706094  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:40.785969  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:40.787506  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:41.057724  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:41.206920  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:41.285889  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:41.286061  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:41.557329  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:41.707627  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:41.785681  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:41.785862  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:42.059175  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:42.206781  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:42.224914  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:42.285901  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:42.286315  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:42.557864  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:42.706738  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:42.786170  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:42.786564  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:43.058200  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:43.206581  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:43.285733  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:43.285968  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:43.557758  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:43.707681  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:43.785793  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:43.785830  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:44.058410  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:44.206349  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:44.284973  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:44.286065  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:44.556849  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:44.707277  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:44.723970  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:44.785070  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:44.786356  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:45.058651  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:45.208153  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:45.286051  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:45.287302  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:45.557900  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:45.706914  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:45.785616  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:45.786069  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:46.057300  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:46.206428  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:46.285019  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:46.286284  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:46.557495  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:46.706335  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:46.724264  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:46.785151  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:46.786667  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:47.057330  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:47.205900  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:47.286011  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:47.286279  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:47.556803  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:47.706773  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:47.785810  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:47.785986  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:48.057524  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:48.206525  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:48.284772  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:48.285904  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:48.557126  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:48.706107  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:48.785718  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:48.786242  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:49.057747  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:49.206595  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:49.224323  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:49.285519  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:49.285816  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:49.556800  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:49.706999  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:49.784870  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:49.787266  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:50.057592  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:50.206471  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:50.285431  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:50.285494  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:50.557181  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:50.707483  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:50.785269  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:50.786575  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:51.058283  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:51.207461  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:51.284981  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:51.285847  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:51.557956  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:51.706842  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:51.724338  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:51.785670  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:51.785865  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:52.059450  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:52.206062  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:52.284702  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:52.286358  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:52.557164  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:52.706215  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:52.785316  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:52.786581  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:53.059085  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:53.206309  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:53.284672  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:53.286133  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:53.557342  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:53.710583  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:53.785339  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:53.785946  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:54.057069  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:54.206989  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:54.224942  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:54.284735  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:54.286197  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:54.557214  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:54.706071  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:54.784660  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:54.785347  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:55.058133  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:55.206171  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:55.284945  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:55.286735  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:55.556857  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:55.706967  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:55.785795  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:55.785950  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:56.057293  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:56.206591  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:56.284834  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:56.286378  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:56.557478  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:56.706538  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:56.724289  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:56.785249  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:56.786055  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:57.057131  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:57.206078  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:57.299620  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:57.299698  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:57.557008  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:57.707256  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:57.785741  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:57.786122  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:58.057712  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:58.205794  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:58.285718  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:58.285908  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:58.557935  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:58.706150  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:58.724832  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:58.785796  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:58.785969  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:59.056950  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:59.207050  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:59.286368  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:59.286824  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:59.556815  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:59.706649  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:59.785200  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:59.786067  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:00.068940  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:00.210037  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:00.302811  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:00.302990  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:00.556951  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:00.705466  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:00.784594  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:00.785629  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:01.058065  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:01.206206  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:59:01.224220  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:59:01.284841  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:01.285873  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:01.558381  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:01.706147  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:01.785912  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:01.786210  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:02.059875  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:02.206842  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:02.285692  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:02.285932  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:02.556760  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:02.707078  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:02.785295  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:02.786211  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:03.057483  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:03.207412  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:03.285061  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:03.286175  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:03.557033  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:03.705891  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:59:03.724445  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:59:03.786276  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:03.786503  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:04.057816  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:04.206785  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:04.285786  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:04.286202  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:04.557301  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:04.706309  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:04.784820  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:04.786178  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:05.057326  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:05.206151  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:05.284976  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:05.285886  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:05.557265  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:05.706431  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:05.785634  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:05.785746  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:06.057108  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:06.206250  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:59:06.223992  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:59:06.284544  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:06.285077  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:06.556895  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:06.706675  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:06.785568  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:06.785763  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:07.056979  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:07.205974  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:07.285732  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:07.285841  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:07.557276  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:07.706497  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:07.785709  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:07.786265  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:08.057660  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:08.206775  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:08.285917  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:08.286191  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:08.557110  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:08.706196  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:59:08.724234  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:59:08.784994  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:08.786291  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:09.057143  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:09.205752  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:09.285321  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:09.286947  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:09.557186  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:09.705895  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:09.728771  285663 node_ready.go:49] node "addons-984173" is "Ready"
	I1123 08:59:09.728804  285663 node_ready.go:38] duration metric: took 39.007711408s for node "addons-984173" to be "Ready" ...
	I1123 08:59:09.728819  285663 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:59:09.728900  285663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:59:09.747005  285663 api_server.go:72] duration metric: took 41.330787976s to wait for apiserver process to appear ...
	I1123 08:59:09.747033  285663 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:59:09.747052  285663 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 08:59:09.769899  285663 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 08:59:09.771634  285663 api_server.go:141] control plane version: v1.34.1
	I1123 08:59:09.771670  285663 api_server.go:131] duration metric: took 24.630281ms to wait for apiserver health ...
	I1123 08:59:09.771680  285663 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:59:09.834685  285663 system_pods.go:59] 18 kube-system pods found
	I1123 08:59:09.834723  285663 system_pods.go:61] "coredns-66bc5c9577-d2nfj" [bfd6365a-09c7-4c05-879a-e2eb73527961] Pending
	I1123 08:59:09.834729  285663 system_pods.go:61] "csi-hostpath-attacher-0" [0fc9010e-94b4-4c43-b918-6beb79362f03] Pending
	I1123 08:59:09.834758  285663 system_pods.go:61] "csi-hostpath-resizer-0" [b6ebc0dc-1eaf-429b-8a6a-f632e2fc6e17] Pending
	I1123 08:59:09.834772  285663 system_pods.go:61] "csi-hostpathplugin-2kj78" [61ab5b5b-d56e-493a-99cb-522dc9af7cfe] Pending
	I1123 08:59:09.834776  285663 system_pods.go:61] "etcd-addons-984173" [fd3e24a8-6c25-4b06-91bb-546e6bbf3282] Running
	I1123 08:59:09.834781  285663 system_pods.go:61] "kindnet-694tf" [c26ca19a-a40f-44d6-b753-479597734109] Running
	I1123 08:59:09.834785  285663 system_pods.go:61] "kube-apiserver-addons-984173" [24699bd1-e12c-409d-a518-2986ee6304e4] Running
	I1123 08:59:09.834788  285663 system_pods.go:61] "kube-controller-manager-addons-984173" [db313608-3959-4244-9fdf-e032e049f063] Running
	I1123 08:59:09.834803  285663 system_pods.go:61] "kube-ingress-dns-minikube" [47acf22f-c161-46e7-8b97-692610b92f19] Pending
	I1123 08:59:09.834807  285663 system_pods.go:61] "kube-proxy-wfr86" [161587c4-704b-4433-bd0d-df2bbce113bf] Running
	I1123 08:59:09.834826  285663 system_pods.go:61] "kube-scheduler-addons-984173" [b4634404-b4ec-4f19-a5a0-37e6063ffe91] Running
	I1123 08:59:09.834836  285663 system_pods.go:61] "metrics-server-85b7d694d7-q7k2v" [10e46f11-8afd-4338-abf6-90235104b38c] Pending
	I1123 08:59:09.834840  285663 system_pods.go:61] "registry-6b586f9694-r7jl6" [30719118-851e-4542-a4f8-c89f68f6bd04] Pending
	I1123 08:59:09.834843  285663 system_pods.go:61] "registry-creds-764b6fb674-lxww8" [5a528301-690c-4034-989a-9dd8b4c6b876] Pending
	I1123 08:59:09.834861  285663 system_pods.go:61] "registry-proxy-xt9vl" [0e2ac94e-9a8b-4407-8901-cdf4a4fdfc8a] Pending
	I1123 08:59:09.834873  285663 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gbxvb" [b0c3071e-fd3e-4174-a01b-9138498a07c1] Pending
	I1123 08:59:09.834877  285663 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qrk99" [235dd6f3-7b70-4cd1-b640-d2d877b3a77c] Pending
	I1123 08:59:09.834881  285663 system_pods.go:61] "storage-provisioner" [fbafe21b-5d28-4c82-a702-3ac2a06c124d] Pending
	I1123 08:59:09.834901  285663 system_pods.go:74] duration metric: took 63.202611ms to wait for pod list to return data ...
	I1123 08:59:09.834916  285663 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:59:09.852209  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:09.876672  285663 default_sa.go:45] found service account: "default"
	I1123 08:59:09.876710  285663 default_sa.go:55] duration metric: took 41.786618ms for default service account to be created ...
	I1123 08:59:09.876721  285663 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:59:09.896390  285663 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 08:59:09.896416  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:09.905743  285663 system_pods.go:86] 19 kube-system pods found
	I1123 08:59:09.905774  285663 system_pods.go:89] "coredns-66bc5c9577-d2nfj" [bfd6365a-09c7-4c05-879a-e2eb73527961] Pending
	I1123 08:59:09.905780  285663 system_pods.go:89] "csi-hostpath-attacher-0" [0fc9010e-94b4-4c43-b918-6beb79362f03] Pending
	I1123 08:59:09.905784  285663 system_pods.go:89] "csi-hostpath-resizer-0" [b6ebc0dc-1eaf-429b-8a6a-f632e2fc6e17] Pending
	I1123 08:59:09.905788  285663 system_pods.go:89] "csi-hostpathplugin-2kj78" [61ab5b5b-d56e-493a-99cb-522dc9af7cfe] Pending
	I1123 08:59:09.905791  285663 system_pods.go:89] "etcd-addons-984173" [fd3e24a8-6c25-4b06-91bb-546e6bbf3282] Running
	I1123 08:59:09.905821  285663 system_pods.go:89] "kindnet-694tf" [c26ca19a-a40f-44d6-b753-479597734109] Running
	I1123 08:59:09.905832  285663 system_pods.go:89] "kube-apiserver-addons-984173" [24699bd1-e12c-409d-a518-2986ee6304e4] Running
	I1123 08:59:09.905837  285663 system_pods.go:89] "kube-controller-manager-addons-984173" [db313608-3959-4244-9fdf-e032e049f063] Running
	I1123 08:59:09.905842  285663 system_pods.go:89] "kube-ingress-dns-minikube" [47acf22f-c161-46e7-8b97-692610b92f19] Pending
	I1123 08:59:09.905852  285663 system_pods.go:89] "kube-proxy-wfr86" [161587c4-704b-4433-bd0d-df2bbce113bf] Running
	I1123 08:59:09.905858  285663 system_pods.go:89] "kube-scheduler-addons-984173" [b4634404-b4ec-4f19-a5a0-37e6063ffe91] Running
	I1123 08:59:09.905867  285663 system_pods.go:89] "metrics-server-85b7d694d7-q7k2v" [10e46f11-8afd-4338-abf6-90235104b38c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:59:09.905873  285663 system_pods.go:89] "nvidia-device-plugin-daemonset-brqdp" [3eb0354a-72da-4330-8b65-cdb7395b7a35] Pending
	I1123 08:59:09.905906  285663 system_pods.go:89] "registry-6b586f9694-r7jl6" [30719118-851e-4542-a4f8-c89f68f6bd04] Pending
	I1123 08:59:09.905918  285663 system_pods.go:89] "registry-creds-764b6fb674-lxww8" [5a528301-690c-4034-989a-9dd8b4c6b876] Pending
	I1123 08:59:09.905922  285663 system_pods.go:89] "registry-proxy-xt9vl" [0e2ac94e-9a8b-4407-8901-cdf4a4fdfc8a] Pending
	I1123 08:59:09.905926  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gbxvb" [b0c3071e-fd3e-4174-a01b-9138498a07c1] Pending
	I1123 08:59:09.905936  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qrk99" [235dd6f3-7b70-4cd1-b640-d2d877b3a77c] Pending
	I1123 08:59:09.905940  285663 system_pods.go:89] "storage-provisioner" [fbafe21b-5d28-4c82-a702-3ac2a06c124d] Pending
	I1123 08:59:09.905955  285663 retry.go:31] will retry after 245.945242ms: missing components: kube-dns
	I1123 08:59:10.116359  285663 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 08:59:10.116391  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:10.179100  285663 system_pods.go:86] 19 kube-system pods found
	I1123 08:59:10.179134  285663 system_pods.go:89] "coredns-66bc5c9577-d2nfj" [bfd6365a-09c7-4c05-879a-e2eb73527961] Pending
	I1123 08:59:10.179149  285663 system_pods.go:89] "csi-hostpath-attacher-0" [0fc9010e-94b4-4c43-b918-6beb79362f03] Pending
	I1123 08:59:10.179177  285663 system_pods.go:89] "csi-hostpath-resizer-0" [b6ebc0dc-1eaf-429b-8a6a-f632e2fc6e17] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:59:10.179190  285663 system_pods.go:89] "csi-hostpathplugin-2kj78" [61ab5b5b-d56e-493a-99cb-522dc9af7cfe] Pending
	I1123 08:59:10.179196  285663 system_pods.go:89] "etcd-addons-984173" [fd3e24a8-6c25-4b06-91bb-546e6bbf3282] Running
	I1123 08:59:10.179201  285663 system_pods.go:89] "kindnet-694tf" [c26ca19a-a40f-44d6-b753-479597734109] Running
	I1123 08:59:10.179224  285663 system_pods.go:89] "kube-apiserver-addons-984173" [24699bd1-e12c-409d-a518-2986ee6304e4] Running
	I1123 08:59:10.179236  285663 system_pods.go:89] "kube-controller-manager-addons-984173" [db313608-3959-4244-9fdf-e032e049f063] Running
	I1123 08:59:10.179243  285663 system_pods.go:89] "kube-ingress-dns-minikube" [47acf22f-c161-46e7-8b97-692610b92f19] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:59:10.179254  285663 system_pods.go:89] "kube-proxy-wfr86" [161587c4-704b-4433-bd0d-df2bbce113bf] Running
	I1123 08:59:10.179259  285663 system_pods.go:89] "kube-scheduler-addons-984173" [b4634404-b4ec-4f19-a5a0-37e6063ffe91] Running
	I1123 08:59:10.179265  285663 system_pods.go:89] "metrics-server-85b7d694d7-q7k2v" [10e46f11-8afd-4338-abf6-90235104b38c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:59:10.179274  285663 system_pods.go:89] "nvidia-device-plugin-daemonset-brqdp" [3eb0354a-72da-4330-8b65-cdb7395b7a35] Pending
	I1123 08:59:10.179280  285663 system_pods.go:89] "registry-6b586f9694-r7jl6" [30719118-851e-4542-a4f8-c89f68f6bd04] Pending
	I1123 08:59:10.179285  285663 system_pods.go:89] "registry-creds-764b6fb674-lxww8" [5a528301-690c-4034-989a-9dd8b4c6b876] Pending
	I1123 08:59:10.179304  285663 system_pods.go:89] "registry-proxy-xt9vl" [0e2ac94e-9a8b-4407-8901-cdf4a4fdfc8a] Pending
	I1123 08:59:10.179319  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gbxvb" [b0c3071e-fd3e-4174-a01b-9138498a07c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:59:10.179336  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qrk99" [235dd6f3-7b70-4cd1-b640-d2d877b3a77c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:59:10.179350  285663 system_pods.go:89] "storage-provisioner" [fbafe21b-5d28-4c82-a702-3ac2a06c124d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:59:10.179380  285663 retry.go:31] will retry after 336.66339ms: missing components: kube-dns
	I1123 08:59:10.268006  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:10.291167  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:10.292864  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:10.523566  285663 system_pods.go:86] 19 kube-system pods found
	I1123 08:59:10.523604  285663 system_pods.go:89] "coredns-66bc5c9577-d2nfj" [bfd6365a-09c7-4c05-879a-e2eb73527961] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:59:10.523637  285663 system_pods.go:89] "csi-hostpath-attacher-0" [0fc9010e-94b4-4c43-b918-6beb79362f03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:59:10.523653  285663 system_pods.go:89] "csi-hostpath-resizer-0" [b6ebc0dc-1eaf-429b-8a6a-f632e2fc6e17] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:59:10.523661  285663 system_pods.go:89] "csi-hostpathplugin-2kj78" [61ab5b5b-d56e-493a-99cb-522dc9af7cfe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 08:59:10.523666  285663 system_pods.go:89] "etcd-addons-984173" [fd3e24a8-6c25-4b06-91bb-546e6bbf3282] Running
	I1123 08:59:10.523671  285663 system_pods.go:89] "kindnet-694tf" [c26ca19a-a40f-44d6-b753-479597734109] Running
	I1123 08:59:10.523678  285663 system_pods.go:89] "kube-apiserver-addons-984173" [24699bd1-e12c-409d-a518-2986ee6304e4] Running
	I1123 08:59:10.523700  285663 system_pods.go:89] "kube-controller-manager-addons-984173" [db313608-3959-4244-9fdf-e032e049f063] Running
	I1123 08:59:10.523714  285663 system_pods.go:89] "kube-ingress-dns-minikube" [47acf22f-c161-46e7-8b97-692610b92f19] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:59:10.523719  285663 system_pods.go:89] "kube-proxy-wfr86" [161587c4-704b-4433-bd0d-df2bbce113bf] Running
	I1123 08:59:10.523738  285663 system_pods.go:89] "kube-scheduler-addons-984173" [b4634404-b4ec-4f19-a5a0-37e6063ffe91] Running
	I1123 08:59:10.523751  285663 system_pods.go:89] "metrics-server-85b7d694d7-q7k2v" [10e46f11-8afd-4338-abf6-90235104b38c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:59:10.523759  285663 system_pods.go:89] "nvidia-device-plugin-daemonset-brqdp" [3eb0354a-72da-4330-8b65-cdb7395b7a35] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:59:10.523780  285663 system_pods.go:89] "registry-6b586f9694-r7jl6" [30719118-851e-4542-a4f8-c89f68f6bd04] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:59:10.523793  285663 system_pods.go:89] "registry-creds-764b6fb674-lxww8" [5a528301-690c-4034-989a-9dd8b4c6b876] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:59:10.523801  285663 system_pods.go:89] "registry-proxy-xt9vl" [0e2ac94e-9a8b-4407-8901-cdf4a4fdfc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 08:59:10.523824  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gbxvb" [b0c3071e-fd3e-4174-a01b-9138498a07c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:59:10.523832  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qrk99" [235dd6f3-7b70-4cd1-b640-d2d877b3a77c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:59:10.523858  285663 system_pods.go:89] "storage-provisioner" [fbafe21b-5d28-4c82-a702-3ac2a06c124d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:59:10.523881  285663 retry.go:31] will retry after 345.682297ms: missing components: kube-dns
	I1123 08:59:10.622541  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:10.721921  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:10.823148  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:10.823590  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:10.923728  285663 system_pods.go:86] 19 kube-system pods found
	I1123 08:59:10.923766  285663 system_pods.go:89] "coredns-66bc5c9577-d2nfj" [bfd6365a-09c7-4c05-879a-e2eb73527961] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:59:10.923798  285663 system_pods.go:89] "csi-hostpath-attacher-0" [0fc9010e-94b4-4c43-b918-6beb79362f03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:59:10.923813  285663 system_pods.go:89] "csi-hostpath-resizer-0" [b6ebc0dc-1eaf-429b-8a6a-f632e2fc6e17] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:59:10.923820  285663 system_pods.go:89] "csi-hostpathplugin-2kj78" [61ab5b5b-d56e-493a-99cb-522dc9af7cfe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 08:59:10.923825  285663 system_pods.go:89] "etcd-addons-984173" [fd3e24a8-6c25-4b06-91bb-546e6bbf3282] Running
	I1123 08:59:10.923838  285663 system_pods.go:89] "kindnet-694tf" [c26ca19a-a40f-44d6-b753-479597734109] Running
	I1123 08:59:10.923843  285663 system_pods.go:89] "kube-apiserver-addons-984173" [24699bd1-e12c-409d-a518-2986ee6304e4] Running
	I1123 08:59:10.923847  285663 system_pods.go:89] "kube-controller-manager-addons-984173" [db313608-3959-4244-9fdf-e032e049f063] Running
	I1123 08:59:10.923872  285663 system_pods.go:89] "kube-ingress-dns-minikube" [47acf22f-c161-46e7-8b97-692610b92f19] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:59:10.923882  285663 system_pods.go:89] "kube-proxy-wfr86" [161587c4-704b-4433-bd0d-df2bbce113bf] Running
	I1123 08:59:10.923887  285663 system_pods.go:89] "kube-scheduler-addons-984173" [b4634404-b4ec-4f19-a5a0-37e6063ffe91] Running
	I1123 08:59:10.923894  285663 system_pods.go:89] "metrics-server-85b7d694d7-q7k2v" [10e46f11-8afd-4338-abf6-90235104b38c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:59:10.923905  285663 system_pods.go:89] "nvidia-device-plugin-daemonset-brqdp" [3eb0354a-72da-4330-8b65-cdb7395b7a35] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:59:10.923912  285663 system_pods.go:89] "registry-6b586f9694-r7jl6" [30719118-851e-4542-a4f8-c89f68f6bd04] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:59:10.923923  285663 system_pods.go:89] "registry-creds-764b6fb674-lxww8" [5a528301-690c-4034-989a-9dd8b4c6b876] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:59:10.923946  285663 system_pods.go:89] "registry-proxy-xt9vl" [0e2ac94e-9a8b-4407-8901-cdf4a4fdfc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 08:59:10.923961  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gbxvb" [b0c3071e-fd3e-4174-a01b-9138498a07c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:59:10.923981  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qrk99" [235dd6f3-7b70-4cd1-b640-d2d877b3a77c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:59:10.923996  285663 system_pods.go:89] "storage-provisioner" [fbafe21b-5d28-4c82-a702-3ac2a06c124d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:59:10.924028  285663 retry.go:31] will retry after 601.407037ms: missing components: kube-dns
	I1123 08:59:11.058157  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:11.206268  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:11.286819  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:11.286868  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:11.530156  285663 system_pods.go:86] 19 kube-system pods found
	I1123 08:59:11.530199  285663 system_pods.go:89] "coredns-66bc5c9577-d2nfj" [bfd6365a-09c7-4c05-879a-e2eb73527961] Running
	I1123 08:59:11.530211  285663 system_pods.go:89] "csi-hostpath-attacher-0" [0fc9010e-94b4-4c43-b918-6beb79362f03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:59:11.530220  285663 system_pods.go:89] "csi-hostpath-resizer-0" [b6ebc0dc-1eaf-429b-8a6a-f632e2fc6e17] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:59:11.530229  285663 system_pods.go:89] "csi-hostpathplugin-2kj78" [61ab5b5b-d56e-493a-99cb-522dc9af7cfe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 08:59:11.530234  285663 system_pods.go:89] "etcd-addons-984173" [fd3e24a8-6c25-4b06-91bb-546e6bbf3282] Running
	I1123 08:59:11.530239  285663 system_pods.go:89] "kindnet-694tf" [c26ca19a-a40f-44d6-b753-479597734109] Running
	I1123 08:59:11.530245  285663 system_pods.go:89] "kube-apiserver-addons-984173" [24699bd1-e12c-409d-a518-2986ee6304e4] Running
	I1123 08:59:11.530249  285663 system_pods.go:89] "kube-controller-manager-addons-984173" [db313608-3959-4244-9fdf-e032e049f063] Running
	I1123 08:59:11.530256  285663 system_pods.go:89] "kube-ingress-dns-minikube" [47acf22f-c161-46e7-8b97-692610b92f19] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:59:11.530272  285663 system_pods.go:89] "kube-proxy-wfr86" [161587c4-704b-4433-bd0d-df2bbce113bf] Running
	I1123 08:59:11.530278  285663 system_pods.go:89] "kube-scheduler-addons-984173" [b4634404-b4ec-4f19-a5a0-37e6063ffe91] Running
	I1123 08:59:11.530288  285663 system_pods.go:89] "metrics-server-85b7d694d7-q7k2v" [10e46f11-8afd-4338-abf6-90235104b38c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:59:11.530295  285663 system_pods.go:89] "nvidia-device-plugin-daemonset-brqdp" [3eb0354a-72da-4330-8b65-cdb7395b7a35] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:59:11.530304  285663 system_pods.go:89] "registry-6b586f9694-r7jl6" [30719118-851e-4542-a4f8-c89f68f6bd04] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:59:11.530311  285663 system_pods.go:89] "registry-creds-764b6fb674-lxww8" [5a528301-690c-4034-989a-9dd8b4c6b876] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:59:11.530317  285663 system_pods.go:89] "registry-proxy-xt9vl" [0e2ac94e-9a8b-4407-8901-cdf4a4fdfc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 08:59:11.530326  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gbxvb" [b0c3071e-fd3e-4174-a01b-9138498a07c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:59:11.530332  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qrk99" [235dd6f3-7b70-4cd1-b640-d2d877b3a77c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:59:11.530346  285663 system_pods.go:89] "storage-provisioner" [fbafe21b-5d28-4c82-a702-3ac2a06c124d] Running
	I1123 08:59:11.530364  285663 system_pods.go:126] duration metric: took 1.653636605s to wait for k8s-apps to be running ...
	I1123 08:59:11.530377  285663 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:59:11.530442  285663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:59:11.543333  285663 system_svc.go:56] duration metric: took 12.934721ms WaitForService to wait for kubelet
	I1123 08:59:11.543368  285663 kubeadm.go:587] duration metric: took 43.127152055s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:59:11.543384  285663 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:59:11.546413  285663 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:59:11.546440  285663 node_conditions.go:123] node cpu capacity is 2
	I1123 08:59:11.546457  285663 node_conditions.go:105] duration metric: took 3.066199ms to run NodePressure ...
	I1123 08:59:11.546469  285663 start.go:242] waiting for startup goroutines ...
	I1123 08:59:11.557653  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:11.707453  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:11.807997  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:11.809210  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:12.060168  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:12.206330  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:12.304322  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:12.304443  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:12.558757  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:12.707705  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:12.789325  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:12.789786  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:13.057571  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:13.207274  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:13.288273  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:13.288659  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:13.562251  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:13.706525  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:13.809190  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:13.809602  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:14.059012  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:14.207595  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:14.287572  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:14.287953  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:14.558273  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:14.707819  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:14.788358  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:14.788765  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:15.057836  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:15.207113  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:15.287754  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:15.288245  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:15.558445  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:15.706378  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:15.787266  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:15.787397  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:16.065978  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:16.207467  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:16.288059  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:16.288425  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:16.558389  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:16.707697  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:16.787842  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:16.788241  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:17.058145  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:17.206218  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:17.286992  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:17.287471  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:17.558135  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:17.706682  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:17.786500  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:17.787439  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:18.057986  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:18.206413  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:18.287156  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:18.287624  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:18.558956  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:18.706411  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:18.786295  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:18.786500  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:19.057752  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:19.206440  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:19.286117  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:19.286651  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:19.559235  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:19.706521  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:19.786155  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:19.786879  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:20.057703  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:20.206836  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:20.287331  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:20.287753  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:20.558658  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:20.707274  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:20.787873  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:20.788392  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:21.058148  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:21.206355  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:21.285918  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:21.287245  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:21.557214  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:21.706623  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:21.786967  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:21.787247  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:22.057686  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:22.206870  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:22.285356  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:22.287159  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:22.557882  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:22.707609  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:22.786697  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:22.787128  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:23.057931  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:23.206898  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:23.285323  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:23.287686  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:23.557965  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:23.707041  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:23.786572  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:23.786720  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:24.059242  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:24.206201  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:24.285226  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:24.287927  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:24.558512  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:24.707000  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:24.786575  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:24.786704  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:25.057842  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:25.206976  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:25.285652  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:25.286782  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:25.557271  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:25.706613  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:25.788376  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:25.788770  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:26.058970  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:26.205847  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:26.284800  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:26.285789  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:26.558597  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:26.707662  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:26.809182  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:26.809573  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:27.060123  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:27.206624  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:27.287521  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:27.287683  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:27.558615  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:27.707383  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:27.787736  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:27.787771  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:28.058865  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:28.207081  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:28.284838  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:28.286990  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:28.557871  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:28.706559  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:28.787216  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:28.787671  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:29.057735  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:29.207085  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:29.286868  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:29.287771  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:29.558198  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:29.706853  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:29.799745  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:29.799904  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:30.058837  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:30.207203  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:30.287860  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:30.288237  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:30.558256  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:30.706212  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:30.787179  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:30.787761  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:31.057577  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:31.206896  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:31.285342  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:31.286060  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:31.557550  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:31.706409  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:31.786770  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:31.787164  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:32.059631  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:32.207229  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:32.286559  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:32.287386  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:32.558475  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:32.705812  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:32.785184  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:32.786980  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:33.057242  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:33.206159  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:33.286587  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:33.286804  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:33.558068  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:33.706155  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:33.786219  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:33.786849  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:34.057688  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:34.206480  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:34.291641  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:34.291796  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:34.557516  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:34.706908  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:34.808472  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:34.808647  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:35.058618  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:35.206564  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:35.287051  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:35.287368  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:35.558838  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:35.707169  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:35.787100  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:35.787468  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:36.058854  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:36.206050  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:36.286173  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:36.286304  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:36.558451  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:36.706403  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:36.785840  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:36.786876  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:37.057652  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:37.206603  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:37.286884  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:37.287182  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:37.557530  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:37.707156  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:37.785390  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:37.786199  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:38.058548  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:38.207544  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:38.287370  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:38.287747  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:38.557881  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:38.706223  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:38.786066  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:38.788535  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:39.058211  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:39.206836  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:39.288297  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:39.289632  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:39.558356  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:39.708254  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:39.787941  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:39.788304  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:40.066679  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:40.210972  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:40.287289  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:40.288045  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:40.558362  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:40.710048  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:40.822616  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:40.822803  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:41.057044  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:41.206774  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:41.286308  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:41.286944  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:41.571896  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:41.707196  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:41.789390  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:41.789860  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:42.058206  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:42.206752  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:42.308165  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:42.308526  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:42.558950  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:42.706597  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:42.807428  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:42.807841  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:43.068378  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:43.217823  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:43.286452  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:43.287006  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:43.561525  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:43.706759  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:43.786075  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:43.786794  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:44.062381  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:44.206639  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:44.286647  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:44.287003  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:44.562719  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:44.706950  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:44.786627  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:44.787071  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:45.068006  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:45.210150  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:45.288734  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:45.289436  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:45.563118  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:45.706185  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:45.787333  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:45.787542  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:46.058572  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:46.206889  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:46.287207  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:46.287859  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:46.557281  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:46.706563  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:46.808174  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:46.808551  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:47.057882  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:47.206981  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:47.285085  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:47.286336  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:47.557474  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:47.706386  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:47.785563  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:47.786240  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:48.058127  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:48.207056  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:48.286875  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:48.287145  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:48.557337  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:48.707085  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:48.786535  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:48.786841  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:49.057747  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:49.206601  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:49.289319  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:49.289359  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:49.557828  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:49.706595  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:49.786750  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:49.786876  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:50.057933  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:50.205917  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:50.285804  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:50.285985  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:50.557553  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:50.708066  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:50.786983  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:50.787337  285663 kapi.go:107] duration metric: took 1m16.006090155s to wait for kubernetes.io/minikube-addons=registry ...
	I1123 08:59:51.058966  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:51.206151  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:51.286641  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:51.558469  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:51.706824  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:51.787245  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:52.057989  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:52.206216  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:52.286134  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:52.557065  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:52.706308  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:52.786351  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:53.057937  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:53.206223  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:53.288275  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:53.558202  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:53.709112  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:53.788369  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:54.064467  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:54.208290  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:54.289503  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:54.558431  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:54.706725  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:54.785598  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:55.060214  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:55.207018  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:55.287433  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:55.557666  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:55.707961  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:55.788247  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:56.057909  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:56.206368  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:56.286253  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:56.558191  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:56.707158  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:56.785971  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:57.059033  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:57.206127  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:57.286323  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:57.557542  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:57.706334  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:57.787247  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:58.059585  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:58.207340  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:58.289356  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:58.558343  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:58.709858  285663 kapi.go:107] duration metric: took 1m20.506831383s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1123 08:59:58.712704  285663 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-984173 cluster.
	I1123 08:59:58.715479  285663 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1123 08:59:58.719558  285663 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1123 08:59:58.786928  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:59.059119  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:59.286704  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:59.556837  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:59.785913  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:00.059350  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:00.346038  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:00.561370  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:00.794383  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:01.062573  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:01.317866  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:01.593710  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:01.789557  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:02.059389  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:02.287887  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:02.558527  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:02.787613  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:03.059061  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:03.287018  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:03.557399  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:03.786963  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:04.057812  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:04.285840  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:04.558134  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:04.787786  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:05.061177  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:05.287161  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:05.558534  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:05.786847  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:06.066618  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:06.288763  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:06.560441  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:06.786963  285663 kapi.go:107] duration metric: took 1m32.004680286s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1123 09:00:07.057606  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:07.563413  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:08.088462  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:08.562658  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:09.058866  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:09.557767  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:10.058897  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:10.558438  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:11.060614  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:11.557623  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:12.058231  285663 kapi.go:107] duration metric: took 1m37.004281849s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1123 09:00:12.061375  285663 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, registry-creds, storage-provisioner, inspektor-gadget, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1123 09:00:12.064266  285663 addons.go:530] duration metric: took 1m43.647604139s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin registry-creds storage-provisioner inspektor-gadget cloud-spanner ingress-dns metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1123 09:00:12.064336  285663 start.go:247] waiting for cluster config update ...
	I1123 09:00:12.064360  285663 start.go:256] writing updated cluster config ...
	I1123 09:00:12.064669  285663 ssh_runner.go:195] Run: rm -f paused
	I1123 09:00:12.069513  285663 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:00:12.158061  285663 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d2nfj" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:12.163423  285663 pod_ready.go:94] pod "coredns-66bc5c9577-d2nfj" is "Ready"
	I1123 09:00:12.163452  285663 pod_ready.go:86] duration metric: took 5.363546ms for pod "coredns-66bc5c9577-d2nfj" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:12.166190  285663 pod_ready.go:83] waiting for pod "etcd-addons-984173" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:12.171073  285663 pod_ready.go:94] pod "etcd-addons-984173" is "Ready"
	I1123 09:00:12.171101  285663 pod_ready.go:86] duration metric: took 4.881119ms for pod "etcd-addons-984173" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:12.173593  285663 pod_ready.go:83] waiting for pod "kube-apiserver-addons-984173" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:12.178228  285663 pod_ready.go:94] pod "kube-apiserver-addons-984173" is "Ready"
	I1123 09:00:12.178258  285663 pod_ready.go:86] duration metric: took 4.637703ms for pod "kube-apiserver-addons-984173" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:12.180670  285663 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-984173" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:12.473745  285663 pod_ready.go:94] pod "kube-controller-manager-addons-984173" is "Ready"
	I1123 09:00:12.473774  285663 pod_ready.go:86] duration metric: took 293.078777ms for pod "kube-controller-manager-addons-984173" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:12.674016  285663 pod_ready.go:83] waiting for pod "kube-proxy-wfr86" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:13.076556  285663 pod_ready.go:94] pod "kube-proxy-wfr86" is "Ready"
	I1123 09:00:13.076595  285663 pod_ready.go:86] duration metric: took 402.557863ms for pod "kube-proxy-wfr86" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:13.274256  285663 pod_ready.go:83] waiting for pod "kube-scheduler-addons-984173" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:13.673573  285663 pod_ready.go:94] pod "kube-scheduler-addons-984173" is "Ready"
	I1123 09:00:13.673606  285663 pod_ready.go:86] duration metric: took 399.318929ms for pod "kube-scheduler-addons-984173" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:13.673621  285663 pod_ready.go:40] duration metric: took 1.604071389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:00:13.747374  285663 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:00:13.753917  285663 out.go:179] * Done! kubectl is now configured to use "addons-984173" cluster and "default" namespace by default
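Editor's note: the gcp-auth messages earlier in this log (credentials mounted into every new pod, the `gcp-auth-skip-secret` label to opt a pod out, and re-running addons enable with --refresh for pods that already exist) translate into two short commands. A minimal sketch, assuming the same addons-984173 profile; the pod name and the label value "true" are illustrative, since the log only requires the key to be present:

    # Re-enable the addon with --refresh so pods created before gcp-auth
    # finished waiting also pick up (or skip) the credential mount.
    out/minikube-linux-arm64 -p addons-984173 addons enable gcp-auth --refresh

    # Start a pod that opts out of the credential mount via the
    # gcp-auth-skip-secret label key.
    kubectl run no-gcp-creds --image=busybox --restart=Never \
      --labels=gcp-auth-skip-secret=true -- sleep 3600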
	
	
	==> CRI-O <==
	Nov 23 09:03:02 addons-984173 crio[828]: time="2025-11-23T09:03:02.515579183Z" level=info msg="Removed container 3cd176b45659703e342900d5c25f5a4e9e5f4bc67d1d43cf22df768775b16c09: kube-system/registry-creds-764b6fb674-lxww8/registry-creds" id=6f1c9f96-e150-4b5e-820c-3b82fe7a436d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:03:19 addons-984173 crio[828]: time="2025-11-23T09:03:19.842983012Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-sk2ww/POD" id=0f9183aa-97d5-425a-a41d-25f1a2ad01c1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:03:19 addons-984173 crio[828]: time="2025-11-23T09:03:19.843059674Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:03:19 addons-984173 crio[828]: time="2025-11-23T09:03:19.859829498Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-sk2ww Namespace:default ID:42a8cd24ee760eb110df8bbe2be3ba854522d5002e765d4097855457baf24f85 UID:0ee4ce29-1e17-40b9-afb0-087ca8a79816 NetNS:/var/run/netns/4c8dfddb-081b-490d-be7a-0101c596d133 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001fbc910}] Aliases:map[]}"
	Nov 23 09:03:19 addons-984173 crio[828]: time="2025-11-23T09:03:19.85998996Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-sk2ww to CNI network \"kindnet\" (type=ptp)"
	Nov 23 09:03:19 addons-984173 crio[828]: time="2025-11-23T09:03:19.878382105Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-sk2ww Namespace:default ID:42a8cd24ee760eb110df8bbe2be3ba854522d5002e765d4097855457baf24f85 UID:0ee4ce29-1e17-40b9-afb0-087ca8a79816 NetNS:/var/run/netns/4c8dfddb-081b-490d-be7a-0101c596d133 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001fbc910}] Aliases:map[]}"
	Nov 23 09:03:19 addons-984173 crio[828]: time="2025-11-23T09:03:19.87871294Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-sk2ww for CNI network kindnet (type=ptp)"
	Nov 23 09:03:19 addons-984173 crio[828]: time="2025-11-23T09:03:19.884425041Z" level=info msg="Ran pod sandbox 42a8cd24ee760eb110df8bbe2be3ba854522d5002e765d4097855457baf24f85 with infra container: default/hello-world-app-5d498dc89-sk2ww/POD" id=0f9183aa-97d5-425a-a41d-25f1a2ad01c1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:03:19 addons-984173 crio[828]: time="2025-11-23T09:03:19.886888512Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=49d14c3c-27a3-4052-b36a-35896bdd0803 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:03:19 addons-984173 crio[828]: time="2025-11-23T09:03:19.887135366Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=49d14c3c-27a3-4052-b36a-35896bdd0803 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:03:19 addons-984173 crio[828]: time="2025-11-23T09:03:19.88724425Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=49d14c3c-27a3-4052-b36a-35896bdd0803 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:03:19 addons-984173 crio[828]: time="2025-11-23T09:03:19.888491296Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=69a2de32-a705-4f39-b6e0-b3e5c645d450 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:03:19 addons-984173 crio[828]: time="2025-11-23T09:03:19.899292713Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 23 09:03:20 addons-984173 crio[828]: time="2025-11-23T09:03:20.57262136Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=69a2de32-a705-4f39-b6e0-b3e5c645d450 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:03:20 addons-984173 crio[828]: time="2025-11-23T09:03:20.574165247Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=27df887c-3a91-4d08-a509-f15692df06ae name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:03:20 addons-984173 crio[828]: time="2025-11-23T09:03:20.576204845Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5d31c58b-c63c-4dfc-a99c-9593e1877980 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:03:20 addons-984173 crio[828]: time="2025-11-23T09:03:20.583475217Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-sk2ww/hello-world-app" id=4fd0c321-b934-43d5-a332-231a1e2a574d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:03:20 addons-984173 crio[828]: time="2025-11-23T09:03:20.583650251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:03:20 addons-984173 crio[828]: time="2025-11-23T09:03:20.596708318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:03:20 addons-984173 crio[828]: time="2025-11-23T09:03:20.596925346Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8230f4eae7a255531fb61e1aed498fcd355f3e85684ec82d1e9ad3ec9c3bdcb4/merged/etc/passwd: no such file or directory"
	Nov 23 09:03:20 addons-984173 crio[828]: time="2025-11-23T09:03:20.596957937Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8230f4eae7a255531fb61e1aed498fcd355f3e85684ec82d1e9ad3ec9c3bdcb4/merged/etc/group: no such file or directory"
	Nov 23 09:03:20 addons-984173 crio[828]: time="2025-11-23T09:03:20.60285432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:03:20 addons-984173 crio[828]: time="2025-11-23T09:03:20.632767775Z" level=info msg="Created container fac971051d17a6a209e5692de77f51069d13f543725fca903d3134363ef9c357: default/hello-world-app-5d498dc89-sk2ww/hello-world-app" id=4fd0c321-b934-43d5-a332-231a1e2a574d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:03:20 addons-984173 crio[828]: time="2025-11-23T09:03:20.63389446Z" level=info msg="Starting container: fac971051d17a6a209e5692de77f51069d13f543725fca903d3134363ef9c357" id=2a1f8ae6-859d-4a55-b69f-84d43823a825 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:03:20 addons-984173 crio[828]: time="2025-11-23T09:03:20.639081155Z" level=info msg="Started container" PID=7195 containerID=fac971051d17a6a209e5692de77f51069d13f543725fca903d3134363ef9c357 description=default/hello-world-app-5d498dc89-sk2ww/hello-world-app id=2a1f8ae6-859d-4a55-b69f-84d43823a825 name=/runtime.v1.RuntimeService/StartContainer sandboxID=42a8cd24ee760eb110df8bbe2be3ba854522d5002e765d4097855457baf24f85
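Editor's note: the CRI-O entries above trace the full start-up of the hello-world-app container: RunPodSandbox, the echo-server image pull, CreateContainer, then StartContainer with PID 7195. A sketch of how the same container could be confirmed from the node side, assuming crictl is run inside the minikube node for this profile and reusing the container ID prefix reported by StartContainer:

    # List the running container backing hello-world-app-5d498dc89-sk2ww.
    out/minikube-linux-arm64 -p addons-984173 ssh -- sudo crictl ps --name hello-world-app

    # Show its logs, using the ID prefix from the StartContainer line above.
    out/minikube-linux-arm64 -p addons-984173 ssh -- sudo crictl logs fac971051d17a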
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	fac971051d17a       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   42a8cd24ee760       hello-world-app-5d498dc89-sk2ww            default
	7d99fc289a2be       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             19 seconds ago           Exited              registry-creds                           4                   a4cd1c254008b       registry-creds-764b6fb674-lxww8            kube-system
	6b6a35442d4b4       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   dd7c6a3d077b4       nginx                                      default
	abf50150c6b0d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   795e8ad4bf31f       busybox                                    default
	742ade421fb24       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   f616696f74e87       csi-hostpathplugin-2kj78                   kube-system
	f6783f9da9552       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   f616696f74e87       csi-hostpathplugin-2kj78                   kube-system
	37d6af059fa8d       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   f616696f74e87       csi-hostpathplugin-2kj78                   kube-system
	66657c8a6cec5       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   f616696f74e87       csi-hostpathplugin-2kj78                   kube-system
	497989a5477b2       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   b3ca61edb8e18       ingress-nginx-controller-6c8bf45fb-gr75s   ingress-nginx
	4a940b19c91cb       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   283e348dea31b       gcp-auth-78565c9fb4-ks57h                  gcp-auth
	8586599f3919f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   f616696f74e87       csi-hostpathplugin-2kj78                   kube-system
	96443e9c408d8       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   673e7e0ed84f5       gadget-7lvml                               gadget
	957a25f0a87eb       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             3 minutes ago            Exited              patch                                    2                   77ed41f1a8977       ingress-nginx-admission-patch-dhzqh        ingress-nginx
	6f902ae88d97e       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   2aa485ec26bc9       registry-proxy-xt9vl                       kube-system
	75511f019181b       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   33d5e9c0058db       nvidia-device-plugin-daemonset-brqdp       kube-system
	de8e74b6f79cb       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   58a50852f3c21       snapshot-controller-7d9fbc56b8-qrk99       kube-system
	f4b7a6278f7aa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              create                                   0                   ea2db8b3a1fa7       ingress-nginx-admission-create-t4d4b       ingress-nginx
	575e9ea051577       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   4fd4fc271ae14       registry-6b586f9694-r7jl6                  kube-system
	2b31531176241       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   f616696f74e87       csi-hostpathplugin-2kj78                   kube-system
	bbd54f9144620       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   637b5019f023b       csi-hostpath-resizer-0                     kube-system
	3c3749cfa9b1e       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   7f514ef75329f       csi-hostpath-attacher-0                    kube-system
	f93636a2eb282       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   76497357f2e3d       snapshot-controller-7d9fbc56b8-gbxvb       kube-system
	5f99b88dae427       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   df984c1630bcd       yakd-dashboard-5ff678cb9-8c2d4             yakd-dashboard
	8f1edccdddb80       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   c3091ae4d59d7       kube-ingress-dns-minikube                  kube-system
	d833df8b1059c       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               4 minutes ago            Running             cloud-spanner-emulator                   0                   d48f1904d4f6d       cloud-spanner-emulator-5bdddb765-272hq     default
	27c8f23d0b241       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   61cee50775478       local-path-provisioner-648f6765c9-psfzp    local-path-storage
	1559bd52645fb       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   d26d6967363f5       metrics-server-85b7d694d7-q7k2v            kube-system
	6c78922b69b65       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   e09e45ceae81a       coredns-66bc5c9577-d2nfj                   kube-system
	de914953e20a9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   ecea510f4a783       storage-provisioner                        kube-system
	87bae25a4298b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   2123de48ed60f       kube-proxy-wfr86                           kube-system
	529e3e6584de1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   624c4ba5d4732       kindnet-694tf                              kube-system
	22aab316066d2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   1b6718cc289ba       kube-controller-manager-addons-984173      kube-system
	d9e34f2271d2d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   515260730e8da       etcd-addons-984173                         kube-system
	61a76b638e0c8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   93de3b7164280       kube-scheduler-addons-984173               kube-system
	126a521cf3c9c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   621e70c0d457e       kube-apiserver-addons-984173               kube-system
	
	
	==> coredns [6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da] <==
	[INFO] 10.244.0.18:45567 - 55019 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.031670232s
	[INFO] 10.244.0.18:45567 - 34008 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000125925s
	[INFO] 10.244.0.18:45567 - 19766 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00016385s
	[INFO] 10.244.0.18:44761 - 54278 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150566s
	[INFO] 10.244.0.18:44761 - 54073 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000216241s
	[INFO] 10.244.0.18:60954 - 33700 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000109335s
	[INFO] 10.244.0.18:60954 - 33511 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000181032s
	[INFO] 10.244.0.18:37864 - 1551 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107586s
	[INFO] 10.244.0.18:37864 - 1371 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000095196s
	[INFO] 10.244.0.18:46406 - 7599 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002389786s
	[INFO] 10.244.0.18:46406 - 7385 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00240705s
	[INFO] 10.244.0.18:45602 - 17654 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000181474s
	[INFO] 10.244.0.18:45602 - 17834 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000242711s
	[INFO] 10.244.0.20:58756 - 4153 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00021757s
	[INFO] 10.244.0.20:49572 - 9940 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000090742s
	[INFO] 10.244.0.20:41829 - 53885 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000168462s
	[INFO] 10.244.0.20:33570 - 46025 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000281792s
	[INFO] 10.244.0.20:40211 - 51999 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116243s
	[INFO] 10.244.0.20:60296 - 54940 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000302904s
	[INFO] 10.244.0.20:41203 - 42266 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003383725s
	[INFO] 10.244.0.20:42253 - 40498 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002493548s
	[INFO] 10.244.0.20:39723 - 9600 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001181698s
	[INFO] 10.244.0.20:45445 - 48542 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001976333s
	[INFO] 10.244.0.23:40262 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000176051s
	[INFO] 10.244.0.23:59836 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101852s
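Editor's note: the coredns queries above show the registry service name being expanded through the pod's search path: the kube-system.svc.cluster.local, svc.cluster.local, cluster.local and us-east-2.compute.internal suffixed forms all return NXDOMAIN, and only the fully qualified registry.kube-system.svc.cluster.local resolves with NOERROR. The same resolution path can be reproduced from inside the cluster with a throwaway pod; a sketch, with an arbitrary pod name and busybox:1.28 chosen because its nslookup output is the one commonly used in Kubernetes DNS debugging guides:

    # One-shot DNS probe from the default namespace; the pod is removed on exit.
    kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.28 -- \
      nslookup registry.kube-system.svc.cluster.local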
	
	
	==> describe nodes <==
	Name:               addons-984173
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-984173
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=addons-984173
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_58_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-984173
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-984173"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:58:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-984173
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:03:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:02:57 +0000   Sun, 23 Nov 2025 08:58:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:02:57 +0000   Sun, 23 Nov 2025 08:58:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:02:57 +0000   Sun, 23 Nov 2025 08:58:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:02:57 +0000   Sun, 23 Nov 2025 08:59:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-984173
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                6a936c40-0715-486d-ba6b-a609979f7ac2
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     cloud-spanner-emulator-5bdddb765-272hq      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  default                     hello-world-app-5d498dc89-sk2ww             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-7lvml                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  gcp-auth                    gcp-auth-78565c9fb4-ks57h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-gr75s    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m47s
	  kube-system                 coredns-66bc5c9577-d2nfj                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m53s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 csi-hostpathplugin-2kj78                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 etcd-addons-984173                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m58s
	  kube-system                 kindnet-694tf                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m53s
	  kube-system                 kube-apiserver-addons-984173                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-addons-984173       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-proxy-wfr86                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-scheduler-addons-984173                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 metrics-server-85b7d694d7-q7k2v             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m49s
	  kube-system                 nvidia-device-plugin-daemonset-brqdp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 registry-6b586f9694-r7jl6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 registry-creds-764b6fb674-lxww8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 registry-proxy-xt9vl                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 snapshot-controller-7d9fbc56b8-gbxvb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 snapshot-controller-7d9fbc56b8-qrk99        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  local-path-storage          local-path-provisioner-648f6765c9-psfzp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-8c2d4              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m52s  kube-proxy       
	  Normal   Starting                 4m59s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m59s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m58s  kubelet          Node addons-984173 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m58s  kubelet          Node addons-984173 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m58s  kubelet          Node addons-984173 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m54s  node-controller  Node addons-984173 event: Registered Node addons-984173 in Controller
	  Normal   NodeReady                4m12s  kubelet          Node addons-984173 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015154] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.511595] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034200] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753844] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.833249] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:37] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/22/fs': -2
	[Nov23 08:56] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 08:58] overlayfs: idmapped layers are currently not supported
	[  +0.083595] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26] <==
	{"level":"warn","ts":"2025-11-23T08:58:19.155842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.166418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.180317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.198133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.213826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.243884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.253850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.269727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.286040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.297897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.326953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.338338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.351345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.372997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.390196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.433637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.466281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.494123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.585924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:35.275219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:35.280014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:57.277002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:57.294553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:57.319200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:57.333017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45510","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [4a940b19c91cba536d1faf436f7e4cf75e22128cd7881abb3fc0d5bdca59149d] <==
	2025/11/23 08:59:58 GCP Auth Webhook started!
	2025/11/23 09:00:14 Ready to marshal response ...
	2025/11/23 09:00:14 Ready to write response ...
	2025/11/23 09:00:14 Ready to marshal response ...
	2025/11/23 09:00:14 Ready to write response ...
	2025/11/23 09:00:14 Ready to marshal response ...
	2025/11/23 09:00:14 Ready to write response ...
	2025/11/23 09:00:33 Ready to marshal response ...
	2025/11/23 09:00:33 Ready to write response ...
	2025/11/23 09:00:35 Ready to marshal response ...
	2025/11/23 09:00:35 Ready to write response ...
	2025/11/23 09:00:35 Ready to marshal response ...
	2025/11/23 09:00:35 Ready to write response ...
	2025/11/23 09:00:44 Ready to marshal response ...
	2025/11/23 09:00:44 Ready to write response ...
	2025/11/23 09:00:49 Ready to marshal response ...
	2025/11/23 09:00:49 Ready to write response ...
	2025/11/23 09:00:59 Ready to marshal response ...
	2025/11/23 09:00:59 Ready to write response ...
	2025/11/23 09:01:08 Ready to marshal response ...
	2025/11/23 09:01:08 Ready to write response ...
	2025/11/23 09:03:19 Ready to marshal response ...
	2025/11/23 09:03:19 Ready to write response ...
	
	
	==> kernel <==
	 09:03:21 up  1:45,  0 user,  load average: 0.50, 1.78, 2.80
	Linux addons-984173 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8] <==
	I1123 09:01:19.458865       1 main.go:301] handling current node
	I1123 09:01:29.459411       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:01:29.459539       1 main.go:301] handling current node
	I1123 09:01:39.458651       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:01:39.458684       1 main.go:301] handling current node
	I1123 09:01:49.458630       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:01:49.458662       1 main.go:301] handling current node
	I1123 09:01:59.458644       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:01:59.458677       1 main.go:301] handling current node
	I1123 09:02:09.460342       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:02:09.460464       1 main.go:301] handling current node
	I1123 09:02:19.461479       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:02:19.461515       1 main.go:301] handling current node
	I1123 09:02:29.459541       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:02:29.459649       1 main.go:301] handling current node
	I1123 09:02:39.460397       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:02:39.460433       1 main.go:301] handling current node
	I1123 09:02:49.460514       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:02:49.460550       1 main.go:301] handling current node
	I1123 09:02:59.459691       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:02:59.459725       1 main.go:301] handling current node
	I1123 09:03:09.463408       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:03:09.463447       1 main.go:301] handling current node
	I1123 09:03:19.460019       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:03:19.460152       1 main.go:301] handling current node
	
	
	==> kube-apiserver [126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1123 08:59:25.430483       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 08:59:25.430532       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1123 08:59:25.430545       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1123 08:59:25.430580       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 08:59:25.430635       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1123 08:59:25.431744       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1123 08:59:29.442023       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 08:59:29.442165       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1123 08:59:29.443943       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.217.66:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.217.66:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1123 08:59:29.530836       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1123 09:00:23.074640       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43458: use of closed network connection
	E1123 09:00:23.514952       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43504: use of closed network connection
	I1123 09:00:59.730349       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1123 09:01:00.083367       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.27.34"}
	I1123 09:01:01.863733       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1123 09:03:19.714121       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.141.223"}
	
	
	==> kube-controller-manager [22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca] <==
	I1123 08:58:27.280010       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:58:27.289532       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:58:27.291397       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:58:27.291423       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:58:27.291435       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:58:27.305524       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:58:27.306850       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-984173" podCIDRs=["10.244.0.0/24"]
	I1123 08:58:27.307846       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:58:27.307922       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:58:27.308431       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:58:27.310098       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:58:27.316282       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:58:27.318647       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:58:27.321372       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1123 08:58:32.788010       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1123 08:58:57.269888       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1123 08:58:57.270055       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1123 08:58:57.270106       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1123 08:58:57.307495       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1123 08:58:57.312040       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 08:58:57.371156       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:58:57.412595       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:59:12.265511       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1123 08:59:27.376824       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1123 08:59:27.425727       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c] <==
	I1123 08:58:29.300494       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:58:29.402130       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:58:29.503139       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:58:29.503177       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 08:58:29.503258       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:58:29.548128       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:58:29.548173       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:58:29.553177       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:58:29.553643       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:58:29.553657       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:58:29.558226       1 config.go:200] "Starting service config controller"
	I1123 08:58:29.558244       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:58:29.558279       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:58:29.558284       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:58:29.558296       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:58:29.558300       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:58:29.559074       1 config.go:309] "Starting node config controller"
	I1123 08:58:29.559081       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:58:29.559089       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:58:29.658771       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:58:29.658841       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:58:29.659126       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c] <==
	E1123 08:58:20.385246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:58:20.385372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:58:20.385496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:58:20.385599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:58:20.385699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:58:20.388597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 08:58:20.389029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:58:20.389154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:58:20.389236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:58:20.389304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:58:20.389335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:58:20.389430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:58:20.389552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:58:20.389634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:58:21.206573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:58:21.240078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:58:21.240078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:58:21.454852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:58:21.462287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:58:21.493397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:58:21.534902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:58:21.556565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:58:21.605746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:58:21.623027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1123 08:58:21.942121       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:02:15 addons-984173 kubelet[1283]: E1123 09:02:15.320671    1283 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-lxww8_kube-system(5a528301-690c-4034-989a-9dd8b4c6b876)\"" pod="kube-system/registry-creds-764b6fb674-lxww8" podUID="5a528301-690c-4034-989a-9dd8b4c6b876"
	Nov 23 09:02:25 addons-984173 kubelet[1283]: I1123 09:02:25.020688    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-brqdp" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:02:27 addons-984173 kubelet[1283]: I1123 09:02:27.020071    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-lxww8" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:02:27 addons-984173 kubelet[1283]: I1123 09:02:27.020570    1283 scope.go:117] "RemoveContainer" containerID="3cd176b45659703e342900d5c25f5a4e9e5f4bc67d1d43cf22df768775b16c09"
	Nov 23 09:02:27 addons-984173 kubelet[1283]: E1123 09:02:27.020789    1283 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-lxww8_kube-system(5a528301-690c-4034-989a-9dd8b4c6b876)\"" pod="kube-system/registry-creds-764b6fb674-lxww8" podUID="5a528301-690c-4034-989a-9dd8b4c6b876"
	Nov 23 09:02:38 addons-984173 kubelet[1283]: I1123 09:02:38.019060    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-lxww8" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:02:38 addons-984173 kubelet[1283]: I1123 09:02:38.019136    1283 scope.go:117] "RemoveContainer" containerID="3cd176b45659703e342900d5c25f5a4e9e5f4bc67d1d43cf22df768775b16c09"
	Nov 23 09:02:38 addons-984173 kubelet[1283]: E1123 09:02:38.019320    1283 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-lxww8_kube-system(5a528301-690c-4034-989a-9dd8b4c6b876)\"" pod="kube-system/registry-creds-764b6fb674-lxww8" podUID="5a528301-690c-4034-989a-9dd8b4c6b876"
	Nov 23 09:02:48 addons-984173 kubelet[1283]: I1123 09:02:48.019109    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-xt9vl" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:02:49 addons-984173 kubelet[1283]: I1123 09:02:49.020016    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-lxww8" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:02:49 addons-984173 kubelet[1283]: I1123 09:02:49.020543    1283 scope.go:117] "RemoveContainer" containerID="3cd176b45659703e342900d5c25f5a4e9e5f4bc67d1d43cf22df768775b16c09"
	Nov 23 09:02:49 addons-984173 kubelet[1283]: E1123 09:02:49.021543    1283 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-lxww8_kube-system(5a528301-690c-4034-989a-9dd8b4c6b876)\"" pod="kube-system/registry-creds-764b6fb674-lxww8" podUID="5a528301-690c-4034-989a-9dd8b4c6b876"
	Nov 23 09:03:02 addons-984173 kubelet[1283]: I1123 09:03:02.019107    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-lxww8" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:03:02 addons-984173 kubelet[1283]: I1123 09:03:02.019177    1283 scope.go:117] "RemoveContainer" containerID="3cd176b45659703e342900d5c25f5a4e9e5f4bc67d1d43cf22df768775b16c09"
	Nov 23 09:03:02 addons-984173 kubelet[1283]: I1123 09:03:02.499797    1283 scope.go:117] "RemoveContainer" containerID="3cd176b45659703e342900d5c25f5a4e9e5f4bc67d1d43cf22df768775b16c09"
	Nov 23 09:03:02 addons-984173 kubelet[1283]: I1123 09:03:02.500115    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-lxww8" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:03:02 addons-984173 kubelet[1283]: I1123 09:03:02.500160    1283 scope.go:117] "RemoveContainer" containerID="7d99fc289a2be961a70acb1bb5c8328d8dbc224b0625ea18e6aab5434b5fb3bc"
	Nov 23 09:03:02 addons-984173 kubelet[1283]: E1123 09:03:02.500305    1283 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-lxww8_kube-system(5a528301-690c-4034-989a-9dd8b4c6b876)\"" pod="kube-system/registry-creds-764b6fb674-lxww8" podUID="5a528301-690c-4034-989a-9dd8b4c6b876"
	Nov 23 09:03:16 addons-984173 kubelet[1283]: I1123 09:03:16.019849    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-lxww8" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:03:16 addons-984173 kubelet[1283]: I1123 09:03:16.020408    1283 scope.go:117] "RemoveContainer" containerID="7d99fc289a2be961a70acb1bb5c8328d8dbc224b0625ea18e6aab5434b5fb3bc"
	Nov 23 09:03:16 addons-984173 kubelet[1283]: E1123 09:03:16.020690    1283 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-lxww8_kube-system(5a528301-690c-4034-989a-9dd8b4c6b876)\"" pod="kube-system/registry-creds-764b6fb674-lxww8" podUID="5a528301-690c-4034-989a-9dd8b4c6b876"
	Nov 23 09:03:17 addons-984173 kubelet[1283]: I1123 09:03:17.020226    1283 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-r7jl6" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 09:03:19 addons-984173 kubelet[1283]: I1123 09:03:19.638701    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf6lf\" (UniqueName: \"kubernetes.io/projected/0ee4ce29-1e17-40b9-afb0-087ca8a79816-kube-api-access-xf6lf\") pod \"hello-world-app-5d498dc89-sk2ww\" (UID: \"0ee4ce29-1e17-40b9-afb0-087ca8a79816\") " pod="default/hello-world-app-5d498dc89-sk2ww"
	Nov 23 09:03:19 addons-984173 kubelet[1283]: I1123 09:03:19.638775    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0ee4ce29-1e17-40b9-afb0-087ca8a79816-gcp-creds\") pod \"hello-world-app-5d498dc89-sk2ww\" (UID: \"0ee4ce29-1e17-40b9-afb0-087ca8a79816\") " pod="default/hello-world-app-5d498dc89-sk2ww"
	Nov 23 09:03:19 addons-984173 kubelet[1283]: W1123 09:03:19.885664    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c/crio-42a8cd24ee760eb110df8bbe2be3ba854522d5002e765d4097855457baf24f85 WatchSource:0}: Error finding container 42a8cd24ee760eb110df8bbe2be3ba854522d5002e765d4097855457baf24f85: Status 404 returned error can't find the container with id 42a8cd24ee760eb110df8bbe2be3ba854522d5002e765d4097855457baf24f85
	
	
	==> storage-provisioner [de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74] <==
	W1123 09:02:55.939227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:02:57.942735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:02:57.947441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:02:59.950259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:02:59.958231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:01.961359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:01.965709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:03.969102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:03.973444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:05.976487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:05.981265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:07.984189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:07.990834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:09.994270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:10.011928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:12.016213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:12.021381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:14.025262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:14.032429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:16.037341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:16.043346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:18.048024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:18.053844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:20.065707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:20.071627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
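Editor's note: the storage-provisioner log above is almost entirely the same client-go deprecation warning, which suggests the provisioner still reads core/v1 Endpoints, the API that v1.33+ flags in favour of discovery.k8s.io/v1 EndpointSlice. For reference only, a hypothetical client-go sketch of the replacement query (this is not the provisioner's code; the kubeconfig path and namespace are assumptions):

// endpointslice_probe.go -- hypothetical standalone check, not part of the
// storage-provisioner: lists discovery.k8s.io/v1 EndpointSlices, the API the
// deprecation warning above points to, instead of core/v1 Endpoints.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig written by "minikube start" (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("build clientset: %v", err)
	}
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").
		List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("list endpointslices: %v", err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}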
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-984173 -n addons-984173
helpers_test.go:269: (dbg) Run:  kubectl --context addons-984173 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-t4d4b ingress-nginx-admission-patch-dhzqh
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-984173 describe pod ingress-nginx-admission-create-t4d4b ingress-nginx-admission-patch-dhzqh
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-984173 describe pod ingress-nginx-admission-create-t4d4b ingress-nginx-admission-patch-dhzqh: exit status 1 (88.277327ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-t4d4b" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dhzqh" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-984173 describe pod ingress-nginx-admission-create-t4d4b ingress-nginx-admission-patch-dhzqh: exit status 1
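Editor's note: the two NotFound errors above indicate the completed ingress-nginx admission Job pods were removed between the list at helpers_test.go:269 and the describe at helpers_test.go:285. For reference, a hypothetical client-go sketch of the same non-running-pods query; only the context name "addons-984173" is taken from the log, and none of this is part of helpers_test.go:

// nonrunning_pods.go -- hypothetical mirror of
// "kubectl get po -A --field-selector=status.phase!=Running".
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "addons-984173"}
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("build clientset: %v", err)
	}
	// Empty namespace means all namespaces; same field selector as the helper.
	pods, err := cs.CoreV1().Pods("").List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatalf("list pods: %v", err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}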
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-984173 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (344.63617ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:03:22.955749  295223 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:03:22.956787  295223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:03:22.956844  295223 out.go:374] Setting ErrFile to fd 2...
	I1123 09:03:22.956866  295223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:03:22.957201  295223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:03:22.957633  295223 mustload.go:66] Loading cluster: addons-984173
	I1123 09:03:22.958120  295223 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:03:22.958166  295223 addons.go:622] checking whether the cluster is paused
	I1123 09:03:22.958326  295223 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:03:22.958357  295223 host.go:66] Checking if "addons-984173" exists ...
	I1123 09:03:22.958975  295223 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 09:03:22.979063  295223 ssh_runner.go:195] Run: systemctl --version
	I1123 09:03:22.979124  295223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 09:03:22.996662  295223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 09:03:23.120683  295223 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:03:23.120768  295223 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:03:23.184397  295223 cri.go:89] found id: "7d99fc289a2be961a70acb1bb5c8328d8dbc224b0625ea18e6aab5434b5fb3bc"
	I1123 09:03:23.184416  295223 cri.go:89] found id: "742ade421fb244b66d8fcfec87fa144fdc7f8738e38cca57ac6ac0bb8fbceba5"
	I1123 09:03:23.184430  295223 cri.go:89] found id: "f6783f9da95524615f3aa651e3af1196eb24de610f8b5966c9f13c754788eeea"
	I1123 09:03:23.184434  295223 cri.go:89] found id: "37d6af059fa8d9a5c10fe2947c3c9208c14a28bda6e706d53ace9352a57d3538"
	I1123 09:03:23.184437  295223 cri.go:89] found id: "66657c8a6cec57d0f3f4516fbacce8c43b7cd7b560ee7e99d4320d4d8ecee0db"
	I1123 09:03:23.184441  295223 cri.go:89] found id: "8586599f3919f69a8d7f1a7d090d598631c698412878d914f2b728fa92c78020"
	I1123 09:03:23.184444  295223 cri.go:89] found id: "6f902ae88d97ebbadeb5af33479296f1cb746c0980deddddd1b09ef5f3bc8365"
	I1123 09:03:23.184447  295223 cri.go:89] found id: "75511f019181b3813cc7d57031fb5c7b720c0760d787d3dc4e3bb9eab9e447b7"
	I1123 09:03:23.184450  295223 cri.go:89] found id: "de8e74b6f79cb01986f0143aa790500273203248c49b24f1e7569ebf6d7eea3b"
	I1123 09:03:23.184456  295223 cri.go:89] found id: "575e9ea051577a331acd367172e11954e99ac78da0892f1ce1556f6e7afc8bd1"
	I1123 09:03:23.184459  295223 cri.go:89] found id: "2b31531176241977a037c34aeb21cc0ee805446cd4582dd8c05f0bba5e5ee203"
	I1123 09:03:23.184462  295223 cri.go:89] found id: "bbd54f91446202b5a64aa6ec4f3f89b8ecf6e43bdac535a131f6367c8cea942c"
	I1123 09:03:23.184465  295223 cri.go:89] found id: "3c3749cfa9b1ed9f5c7d758974e38093080a45ccbe67f9df133d2a234c4d7216"
	I1123 09:03:23.184468  295223 cri.go:89] found id: "f93636a2eb282d8c5338280be50dffa8bd5f5b5cfff2c23a4c28fe0c8c63af6d"
	I1123 09:03:23.184471  295223 cri.go:89] found id: "8f1edccdddb80a5ba7c8da2abcb736527f5b92c08683957cf3031ee2a7946816"
	I1123 09:03:23.184476  295223 cri.go:89] found id: "1559bd52645fb109e782448eda0f021d65b39a587d504ef500408e924dfe9107"
	I1123 09:03:23.184479  295223 cri.go:89] found id: "6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da"
	I1123 09:03:23.184483  295223 cri.go:89] found id: "de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74"
	I1123 09:03:23.184486  295223 cri.go:89] found id: "87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c"
	I1123 09:03:23.184488  295223 cri.go:89] found id: "529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8"
	I1123 09:03:23.184493  295223 cri.go:89] found id: "22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca"
	I1123 09:03:23.184495  295223 cri.go:89] found id: "d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26"
	I1123 09:03:23.184498  295223 cri.go:89] found id: "61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c"
	I1123 09:03:23.184502  295223 cri.go:89] found id: "126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90"
	I1123 09:03:23.184504  295223 cri.go:89] found id: ""
	I1123 09:03:23.184554  295223 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:03:23.208483  295223 out.go:203] 
	W1123 09:03:23.211439  295223 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:03:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:03:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:03:23.211469  295223 out.go:285] * 
	* 
	W1123 09:03:23.218229  295223 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:03:23.221143  295223 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-984173 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-984173 addons disable ingress --alsologtostderr -v=1: exit status 11 (278.867632ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:03:23.290174  295345 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:03:23.290978  295345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:03:23.290995  295345 out.go:374] Setting ErrFile to fd 2...
	I1123 09:03:23.291002  295345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:03:23.291353  295345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:03:23.291734  295345 mustload.go:66] Loading cluster: addons-984173
	I1123 09:03:23.292175  295345 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:03:23.292198  295345 addons.go:622] checking whether the cluster is paused
	I1123 09:03:23.292310  295345 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:03:23.292323  295345 host.go:66] Checking if "addons-984173" exists ...
	I1123 09:03:23.292851  295345 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 09:03:23.315597  295345 ssh_runner.go:195] Run: systemctl --version
	I1123 09:03:23.315662  295345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 09:03:23.333757  295345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 09:03:23.440198  295345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:03:23.440337  295345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:03:23.470340  295345 cri.go:89] found id: "7d99fc289a2be961a70acb1bb5c8328d8dbc224b0625ea18e6aab5434b5fb3bc"
	I1123 09:03:23.470376  295345 cri.go:89] found id: "742ade421fb244b66d8fcfec87fa144fdc7f8738e38cca57ac6ac0bb8fbceba5"
	I1123 09:03:23.470381  295345 cri.go:89] found id: "f6783f9da95524615f3aa651e3af1196eb24de610f8b5966c9f13c754788eeea"
	I1123 09:03:23.470386  295345 cri.go:89] found id: "37d6af059fa8d9a5c10fe2947c3c9208c14a28bda6e706d53ace9352a57d3538"
	I1123 09:03:23.470389  295345 cri.go:89] found id: "66657c8a6cec57d0f3f4516fbacce8c43b7cd7b560ee7e99d4320d4d8ecee0db"
	I1123 09:03:23.470393  295345 cri.go:89] found id: "8586599f3919f69a8d7f1a7d090d598631c698412878d914f2b728fa92c78020"
	I1123 09:03:23.470396  295345 cri.go:89] found id: "6f902ae88d97ebbadeb5af33479296f1cb746c0980deddddd1b09ef5f3bc8365"
	I1123 09:03:23.470416  295345 cri.go:89] found id: "75511f019181b3813cc7d57031fb5c7b720c0760d787d3dc4e3bb9eab9e447b7"
	I1123 09:03:23.470425  295345 cri.go:89] found id: "de8e74b6f79cb01986f0143aa790500273203248c49b24f1e7569ebf6d7eea3b"
	I1123 09:03:23.470432  295345 cri.go:89] found id: "575e9ea051577a331acd367172e11954e99ac78da0892f1ce1556f6e7afc8bd1"
	I1123 09:03:23.470435  295345 cri.go:89] found id: "2b31531176241977a037c34aeb21cc0ee805446cd4582dd8c05f0bba5e5ee203"
	I1123 09:03:23.470438  295345 cri.go:89] found id: "bbd54f91446202b5a64aa6ec4f3f89b8ecf6e43bdac535a131f6367c8cea942c"
	I1123 09:03:23.470442  295345 cri.go:89] found id: "3c3749cfa9b1ed9f5c7d758974e38093080a45ccbe67f9df133d2a234c4d7216"
	I1123 09:03:23.470445  295345 cri.go:89] found id: "f93636a2eb282d8c5338280be50dffa8bd5f5b5cfff2c23a4c28fe0c8c63af6d"
	I1123 09:03:23.470449  295345 cri.go:89] found id: "8f1edccdddb80a5ba7c8da2abcb736527f5b92c08683957cf3031ee2a7946816"
	I1123 09:03:23.470458  295345 cri.go:89] found id: "1559bd52645fb109e782448eda0f021d65b39a587d504ef500408e924dfe9107"
	I1123 09:03:23.470462  295345 cri.go:89] found id: "6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da"
	I1123 09:03:23.470471  295345 cri.go:89] found id: "de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74"
	I1123 09:03:23.470474  295345 cri.go:89] found id: "87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c"
	I1123 09:03:23.470478  295345 cri.go:89] found id: "529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8"
	I1123 09:03:23.470496  295345 cri.go:89] found id: "22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca"
	I1123 09:03:23.470507  295345 cri.go:89] found id: "d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26"
	I1123 09:03:23.470510  295345 cri.go:89] found id: "61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c"
	I1123 09:03:23.470513  295345 cri.go:89] found id: "126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90"
	I1123 09:03:23.470517  295345 cri.go:89] found id: ""
	I1123 09:03:23.470580  295345 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:03:23.486115  295345 out.go:203] 
	W1123 09:03:23.489173  295345 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:03:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:03:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:03:23.489199  295345 out.go:285] * 
	* 
	W1123 09:03:23.495703  295345 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:03:23.498822  295345 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-984173 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.08s)
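The addon disable/enable failures in this report all follow the same pattern: the functional part of each test passes, then minikube's "is the cluster paused?" check lists CRI containers successfully via crictl but next runs `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory", so the command aborts with MK_ADDON_DISABLE_PAUSED (or MK_ADDON_ENABLE_PAUSED). A minimal sketch of reproducing that check by hand from the test host is below; the `ssh --` invocation and the /run/crun path are illustrative assumptions, not steps addons_test.go itself runs.

	# Illustrative reproduction of the paused check on the node (not part of the test).
	# This is the call that succeeds in the logs above:
	out/minikube-linux-arm64 -p addons-984173 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# This is the call that fails: runc has no state directory on this CRI-O node.
	out/minikube-linux-arm64 -p addons-984173 ssh -- sudo runc list -f json
	# Check which low-level runtime state directory actually exists; /run/crun is an assumption,
	# since CRI-O may be configured with a runtime other than runc.
	out/minikube-linux-arm64 -p addons-984173 ssh -- ls -ld /run/runc /run/crun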

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-7lvml" [00db4778-85c0-4524-bd3b-73fb9d5ebb71] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004121062s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-984173 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (271.8449ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:00:59.207446  293294 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:00:59.208328  293294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:59.208348  293294 out.go:374] Setting ErrFile to fd 2...
	I1123 09:00:59.208362  293294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:59.208643  293294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:00:59.208992  293294 mustload.go:66] Loading cluster: addons-984173
	I1123 09:00:59.209436  293294 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:59.209459  293294 addons.go:622] checking whether the cluster is paused
	I1123 09:00:59.209575  293294 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:59.209591  293294 host.go:66] Checking if "addons-984173" exists ...
	I1123 09:00:59.210179  293294 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 09:00:59.227768  293294 ssh_runner.go:195] Run: systemctl --version
	I1123 09:00:59.227822  293294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 09:00:59.246828  293294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 09:00:59.352060  293294 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:00:59.352162  293294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:00:59.390748  293294 cri.go:89] found id: "742ade421fb244b66d8fcfec87fa144fdc7f8738e38cca57ac6ac0bb8fbceba5"
	I1123 09:00:59.390770  293294 cri.go:89] found id: "f6783f9da95524615f3aa651e3af1196eb24de610f8b5966c9f13c754788eeea"
	I1123 09:00:59.390789  293294 cri.go:89] found id: "37d6af059fa8d9a5c10fe2947c3c9208c14a28bda6e706d53ace9352a57d3538"
	I1123 09:00:59.390794  293294 cri.go:89] found id: "66657c8a6cec57d0f3f4516fbacce8c43b7cd7b560ee7e99d4320d4d8ecee0db"
	I1123 09:00:59.390797  293294 cri.go:89] found id: "8586599f3919f69a8d7f1a7d090d598631c698412878d914f2b728fa92c78020"
	I1123 09:00:59.390825  293294 cri.go:89] found id: "6f902ae88d97ebbadeb5af33479296f1cb746c0980deddddd1b09ef5f3bc8365"
	I1123 09:00:59.390829  293294 cri.go:89] found id: "75511f019181b3813cc7d57031fb5c7b720c0760d787d3dc4e3bb9eab9e447b7"
	I1123 09:00:59.390833  293294 cri.go:89] found id: "de8e74b6f79cb01986f0143aa790500273203248c49b24f1e7569ebf6d7eea3b"
	I1123 09:00:59.390837  293294 cri.go:89] found id: "575e9ea051577a331acd367172e11954e99ac78da0892f1ce1556f6e7afc8bd1"
	I1123 09:00:59.390848  293294 cri.go:89] found id: "2b31531176241977a037c34aeb21cc0ee805446cd4582dd8c05f0bba5e5ee203"
	I1123 09:00:59.390852  293294 cri.go:89] found id: "bbd54f91446202b5a64aa6ec4f3f89b8ecf6e43bdac535a131f6367c8cea942c"
	I1123 09:00:59.390856  293294 cri.go:89] found id: "3c3749cfa9b1ed9f5c7d758974e38093080a45ccbe67f9df133d2a234c4d7216"
	I1123 09:00:59.390859  293294 cri.go:89] found id: "f93636a2eb282d8c5338280be50dffa8bd5f5b5cfff2c23a4c28fe0c8c63af6d"
	I1123 09:00:59.390874  293294 cri.go:89] found id: "8f1edccdddb80a5ba7c8da2abcb736527f5b92c08683957cf3031ee2a7946816"
	I1123 09:00:59.390878  293294 cri.go:89] found id: "1559bd52645fb109e782448eda0f021d65b39a587d504ef500408e924dfe9107"
	I1123 09:00:59.390897  293294 cri.go:89] found id: "6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da"
	I1123 09:00:59.390901  293294 cri.go:89] found id: "de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74"
	I1123 09:00:59.390906  293294 cri.go:89] found id: "87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c"
	I1123 09:00:59.390909  293294 cri.go:89] found id: "529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8"
	I1123 09:00:59.390912  293294 cri.go:89] found id: "22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca"
	I1123 09:00:59.390917  293294 cri.go:89] found id: "d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26"
	I1123 09:00:59.390920  293294 cri.go:89] found id: "61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c"
	I1123 09:00:59.390927  293294 cri.go:89] found id: "126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90"
	I1123 09:00:59.390930  293294 cri.go:89] found id: ""
	I1123 09:00:59.390986  293294 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:00:59.406034  293294 out.go:203] 
	W1123 09:00:59.409037  293294 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:00:59.409073  293294 out.go:285] * 
	* 
	W1123 09:00:59.415583  293294 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:00:59.418553  293294 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-984173 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.28s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.48s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.251737ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-q7k2v" [10e46f11-8afd-4338-abf6-90235104b38c] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004549604s
addons_test.go:463: (dbg) Run:  kubectl --context addons-984173 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-984173 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (347.965108ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:00:52.878564  293126 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:00:52.879316  293126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:52.879327  293126 out.go:374] Setting ErrFile to fd 2...
	I1123 09:00:52.879333  293126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:52.879598  293126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:00:52.879907  293126 mustload.go:66] Loading cluster: addons-984173
	I1123 09:00:52.880323  293126 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:52.880335  293126 addons.go:622] checking whether the cluster is paused
	I1123 09:00:52.880440  293126 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:52.880451  293126 host.go:66] Checking if "addons-984173" exists ...
	I1123 09:00:52.880943  293126 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 09:00:52.898423  293126 ssh_runner.go:195] Run: systemctl --version
	I1123 09:00:52.898483  293126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 09:00:52.916250  293126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 09:00:53.031344  293126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:00:53.031443  293126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:00:53.102402  293126 cri.go:89] found id: "742ade421fb244b66d8fcfec87fa144fdc7f8738e38cca57ac6ac0bb8fbceba5"
	I1123 09:00:53.102426  293126 cri.go:89] found id: "f6783f9da95524615f3aa651e3af1196eb24de610f8b5966c9f13c754788eeea"
	I1123 09:00:53.102431  293126 cri.go:89] found id: "37d6af059fa8d9a5c10fe2947c3c9208c14a28bda6e706d53ace9352a57d3538"
	I1123 09:00:53.102435  293126 cri.go:89] found id: "66657c8a6cec57d0f3f4516fbacce8c43b7cd7b560ee7e99d4320d4d8ecee0db"
	I1123 09:00:53.102438  293126 cri.go:89] found id: "8586599f3919f69a8d7f1a7d090d598631c698412878d914f2b728fa92c78020"
	I1123 09:00:53.102443  293126 cri.go:89] found id: "6f902ae88d97ebbadeb5af33479296f1cb746c0980deddddd1b09ef5f3bc8365"
	I1123 09:00:53.102446  293126 cri.go:89] found id: "75511f019181b3813cc7d57031fb5c7b720c0760d787d3dc4e3bb9eab9e447b7"
	I1123 09:00:53.102450  293126 cri.go:89] found id: "de8e74b6f79cb01986f0143aa790500273203248c49b24f1e7569ebf6d7eea3b"
	I1123 09:00:53.102453  293126 cri.go:89] found id: "575e9ea051577a331acd367172e11954e99ac78da0892f1ce1556f6e7afc8bd1"
	I1123 09:00:53.102460  293126 cri.go:89] found id: "2b31531176241977a037c34aeb21cc0ee805446cd4582dd8c05f0bba5e5ee203"
	I1123 09:00:53.102463  293126 cri.go:89] found id: "bbd54f91446202b5a64aa6ec4f3f89b8ecf6e43bdac535a131f6367c8cea942c"
	I1123 09:00:53.102466  293126 cri.go:89] found id: "3c3749cfa9b1ed9f5c7d758974e38093080a45ccbe67f9df133d2a234c4d7216"
	I1123 09:00:53.102469  293126 cri.go:89] found id: "f93636a2eb282d8c5338280be50dffa8bd5f5b5cfff2c23a4c28fe0c8c63af6d"
	I1123 09:00:53.102472  293126 cri.go:89] found id: "8f1edccdddb80a5ba7c8da2abcb736527f5b92c08683957cf3031ee2a7946816"
	I1123 09:00:53.102476  293126 cri.go:89] found id: "1559bd52645fb109e782448eda0f021d65b39a587d504ef500408e924dfe9107"
	I1123 09:00:53.102484  293126 cri.go:89] found id: "6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da"
	I1123 09:00:53.102488  293126 cri.go:89] found id: "de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74"
	I1123 09:00:53.102492  293126 cri.go:89] found id: "87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c"
	I1123 09:00:53.102495  293126 cri.go:89] found id: "529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8"
	I1123 09:00:53.102498  293126 cri.go:89] found id: "22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca"
	I1123 09:00:53.102502  293126 cri.go:89] found id: "d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26"
	I1123 09:00:53.102505  293126 cri.go:89] found id: "61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c"
	I1123 09:00:53.102509  293126 cri.go:89] found id: "126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90"
	I1123 09:00:53.102519  293126 cri.go:89] found id: ""
	I1123 09:00:53.102567  293126 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:00:53.124751  293126 out.go:203] 
	W1123 09:00:53.128361  293126 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:00:53.128393  293126 out.go:285] * 
	* 
	W1123 09:00:53.134816  293126 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:00:53.140119  293126 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-984173 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.48s)
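As with the other addon tests, the functional portion of MetricsServer passes: the metrics-server pod is healthy and `kubectl top pods -n kube-system` succeeds; only the trailing `addons disable` step trips over the paused check described above. For completeness, a hedged way to confirm the metrics.k8s.io aggregated API that `kubectl top` relies on (not a step the test runs) is:

	# Illustrative checks against the metrics-server aggregated API; not part of addons_test.go.
	kubectl --context addons-984173 get apiservices v1beta1.metrics.k8s.io
	kubectl --context addons-984173 get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods" | head -c 300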

                                                
                                    
x
+
TestAddons/parallel/CSI (32.14s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1123 09:00:45.200549  284904 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1123 09:00:45.206426  284904 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1123 09:00:45.206482  284904 kapi.go:107] duration metric: took 5.941513ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.96517ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-984173 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-984173 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-984173 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-984173 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-984173 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-984173 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-984173 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [1ad5c22d-64cd-4034-85c7-660a3f29bc4f] Pending
helpers_test.go:352: "task-pv-pod" [1ad5c22d-64cd-4034-85c7-660a3f29bc4f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [1ad5c22d-64cd-4034-85c7-660a3f29bc4f] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003989657s
addons_test.go:572: (dbg) Run:  kubectl --context addons-984173 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-984173 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-984173 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-984173 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-984173 delete pod task-pv-pod: (1.082515971s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-984173 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-984173 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-984173 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-984173 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-984173 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-984173 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-984173 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-984173 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [399ea634-feed-4391-9a9d-88e66e30809a] Pending
helpers_test.go:352: "task-pv-pod-restore" [399ea634-feed-4391-9a9d-88e66e30809a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [399ea634-feed-4391-9a9d-88e66e30809a] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003311382s
addons_test.go:614: (dbg) Run:  kubectl --context addons-984173 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-984173 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-984173 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-984173 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (281.840355ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:01:16.834797  293947 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:01:16.835541  293947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:01:16.835555  293947 out.go:374] Setting ErrFile to fd 2...
	I1123 09:01:16.835561  293947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:01:16.836548  293947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:01:16.836876  293947 mustload.go:66] Loading cluster: addons-984173
	I1123 09:01:16.837284  293947 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:01:16.837302  293947 addons.go:622] checking whether the cluster is paused
	I1123 09:01:16.837449  293947 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:01:16.837465  293947 host.go:66] Checking if "addons-984173" exists ...
	I1123 09:01:16.838004  293947 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 09:01:16.855701  293947 ssh_runner.go:195] Run: systemctl --version
	I1123 09:01:16.855758  293947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 09:01:16.873675  293947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 09:01:16.984258  293947 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:01:16.984344  293947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:01:17.026054  293947 cri.go:89] found id: "742ade421fb244b66d8fcfec87fa144fdc7f8738e38cca57ac6ac0bb8fbceba5"
	I1123 09:01:17.026078  293947 cri.go:89] found id: "f6783f9da95524615f3aa651e3af1196eb24de610f8b5966c9f13c754788eeea"
	I1123 09:01:17.026083  293947 cri.go:89] found id: "37d6af059fa8d9a5c10fe2947c3c9208c14a28bda6e706d53ace9352a57d3538"
	I1123 09:01:17.026097  293947 cri.go:89] found id: "66657c8a6cec57d0f3f4516fbacce8c43b7cd7b560ee7e99d4320d4d8ecee0db"
	I1123 09:01:17.026101  293947 cri.go:89] found id: "8586599f3919f69a8d7f1a7d090d598631c698412878d914f2b728fa92c78020"
	I1123 09:01:17.026126  293947 cri.go:89] found id: "6f902ae88d97ebbadeb5af33479296f1cb746c0980deddddd1b09ef5f3bc8365"
	I1123 09:01:17.026136  293947 cri.go:89] found id: "75511f019181b3813cc7d57031fb5c7b720c0760d787d3dc4e3bb9eab9e447b7"
	I1123 09:01:17.026139  293947 cri.go:89] found id: "de8e74b6f79cb01986f0143aa790500273203248c49b24f1e7569ebf6d7eea3b"
	I1123 09:01:17.026142  293947 cri.go:89] found id: "575e9ea051577a331acd367172e11954e99ac78da0892f1ce1556f6e7afc8bd1"
	I1123 09:01:17.026150  293947 cri.go:89] found id: "2b31531176241977a037c34aeb21cc0ee805446cd4582dd8c05f0bba5e5ee203"
	I1123 09:01:17.026160  293947 cri.go:89] found id: "bbd54f91446202b5a64aa6ec4f3f89b8ecf6e43bdac535a131f6367c8cea942c"
	I1123 09:01:17.026164  293947 cri.go:89] found id: "3c3749cfa9b1ed9f5c7d758974e38093080a45ccbe67f9df133d2a234c4d7216"
	I1123 09:01:17.026168  293947 cri.go:89] found id: "f93636a2eb282d8c5338280be50dffa8bd5f5b5cfff2c23a4c28fe0c8c63af6d"
	I1123 09:01:17.026171  293947 cri.go:89] found id: "8f1edccdddb80a5ba7c8da2abcb736527f5b92c08683957cf3031ee2a7946816"
	I1123 09:01:17.026175  293947 cri.go:89] found id: "1559bd52645fb109e782448eda0f021d65b39a587d504ef500408e924dfe9107"
	I1123 09:01:17.026202  293947 cri.go:89] found id: "6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da"
	I1123 09:01:17.026211  293947 cri.go:89] found id: "de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74"
	I1123 09:01:17.026234  293947 cri.go:89] found id: "87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c"
	I1123 09:01:17.026242  293947 cri.go:89] found id: "529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8"
	I1123 09:01:17.026245  293947 cri.go:89] found id: "22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca"
	I1123 09:01:17.026250  293947 cri.go:89] found id: "d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26"
	I1123 09:01:17.026258  293947 cri.go:89] found id: "61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c"
	I1123 09:01:17.026262  293947 cri.go:89] found id: "126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90"
	I1123 09:01:17.026267  293947 cri.go:89] found id: ""
	I1123 09:01:17.026329  293947 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:01:17.042816  293947 out.go:203] 
	W1123 09:01:17.045717  293947 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:01:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:01:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:01:17.045757  293947 out.go:285] * 
	* 
	W1123 09:01:17.052127  293947 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:01:17.055214  293947 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-984173 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-984173 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (275.430004ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:01:17.119957  293991 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:01:17.120729  293991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:01:17.120744  293991 out.go:374] Setting ErrFile to fd 2...
	I1123 09:01:17.120750  293991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:01:17.121003  293991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:01:17.121294  293991 mustload.go:66] Loading cluster: addons-984173
	I1123 09:01:17.121709  293991 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:01:17.121729  293991 addons.go:622] checking whether the cluster is paused
	I1123 09:01:17.121886  293991 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:01:17.121902  293991 host.go:66] Checking if "addons-984173" exists ...
	I1123 09:01:17.122459  293991 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 09:01:17.141254  293991 ssh_runner.go:195] Run: systemctl --version
	I1123 09:01:17.141316  293991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 09:01:17.161068  293991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 09:01:17.268509  293991 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:01:17.268615  293991 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:01:17.302402  293991 cri.go:89] found id: "742ade421fb244b66d8fcfec87fa144fdc7f8738e38cca57ac6ac0bb8fbceba5"
	I1123 09:01:17.302469  293991 cri.go:89] found id: "f6783f9da95524615f3aa651e3af1196eb24de610f8b5966c9f13c754788eeea"
	I1123 09:01:17.302488  293991 cri.go:89] found id: "37d6af059fa8d9a5c10fe2947c3c9208c14a28bda6e706d53ace9352a57d3538"
	I1123 09:01:17.302509  293991 cri.go:89] found id: "66657c8a6cec57d0f3f4516fbacce8c43b7cd7b560ee7e99d4320d4d8ecee0db"
	I1123 09:01:17.302530  293991 cri.go:89] found id: "8586599f3919f69a8d7f1a7d090d598631c698412878d914f2b728fa92c78020"
	I1123 09:01:17.302565  293991 cri.go:89] found id: "6f902ae88d97ebbadeb5af33479296f1cb746c0980deddddd1b09ef5f3bc8365"
	I1123 09:01:17.302583  293991 cri.go:89] found id: "75511f019181b3813cc7d57031fb5c7b720c0760d787d3dc4e3bb9eab9e447b7"
	I1123 09:01:17.302603  293991 cri.go:89] found id: "de8e74b6f79cb01986f0143aa790500273203248c49b24f1e7569ebf6d7eea3b"
	I1123 09:01:17.302625  293991 cri.go:89] found id: "575e9ea051577a331acd367172e11954e99ac78da0892f1ce1556f6e7afc8bd1"
	I1123 09:01:17.302655  293991 cri.go:89] found id: "2b31531176241977a037c34aeb21cc0ee805446cd4582dd8c05f0bba5e5ee203"
	I1123 09:01:17.302686  293991 cri.go:89] found id: "bbd54f91446202b5a64aa6ec4f3f89b8ecf6e43bdac535a131f6367c8cea942c"
	I1123 09:01:17.302705  293991 cri.go:89] found id: "3c3749cfa9b1ed9f5c7d758974e38093080a45ccbe67f9df133d2a234c4d7216"
	I1123 09:01:17.302725  293991 cri.go:89] found id: "f93636a2eb282d8c5338280be50dffa8bd5f5b5cfff2c23a4c28fe0c8c63af6d"
	I1123 09:01:17.302757  293991 cri.go:89] found id: "8f1edccdddb80a5ba7c8da2abcb736527f5b92c08683957cf3031ee2a7946816"
	I1123 09:01:17.302777  293991 cri.go:89] found id: "1559bd52645fb109e782448eda0f021d65b39a587d504ef500408e924dfe9107"
	I1123 09:01:17.302801  293991 cri.go:89] found id: "6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da"
	I1123 09:01:17.302828  293991 cri.go:89] found id: "de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74"
	I1123 09:01:17.302862  293991 cri.go:89] found id: "87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c"
	I1123 09:01:17.302888  293991 cri.go:89] found id: "529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8"
	I1123 09:01:17.302907  293991 cri.go:89] found id: "22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca"
	I1123 09:01:17.302947  293991 cri.go:89] found id: "d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26"
	I1123 09:01:17.302973  293991 cri.go:89] found id: "61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c"
	I1123 09:01:17.302992  293991 cri.go:89] found id: "126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90"
	I1123 09:01:17.303012  293991 cri.go:89] found id: ""
	I1123 09:01:17.303093  293991 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:01:17.317571  293991 out.go:203] 
	W1123 09:01:17.320468  293991 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:01:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:01:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:01:17.320492  293991 out.go:285] * 
	* 
	W1123 09:01:17.326817  293991 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:01:17.329807  293991 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-984173 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (32.14s)
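The CSI workflow itself completes: the PVC, pod, VolumeSnapshot, and the restored PVC/pod all reach the expected state, and the single WARNING above appears to be the first poll of .status.readyToUse returning empty before the snapshot was ready; only the final disable steps fail on the paused check. A hedged sketch of watching that readiness by hand while the test objects still exist (illustrative only; the restore PVC presumably uses a dataSource pointing at the snapshot, matching the usual csi-hostpath-driver testdata):

	# Illustrative snapshot-readiness checks; the test polls .status.readyToUse the same way via helpers_test.go.
	kubectl --context addons-984173 -n default get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
	kubectl --context addons-984173 get volumesnapshotcontent
	# The restore side binds a new PVC (hpvc-restore), which task-pv-pod-restore then mounts.
	kubectl --context addons-984173 -n default get pvc hpvc-restore -o jsonpath='{.status.phase}'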

                                                
                                    
x
+
TestAddons/parallel/Headlamp (3.93s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-984173 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-984173 --alsologtostderr -v=1: exit status 11 (330.657039ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:00:43.806536  292332 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:00:43.807705  292332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:43.807726  292332 out.go:374] Setting ErrFile to fd 2...
	I1123 09:00:43.807733  292332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:43.808004  292332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:00:43.808281  292332 mustload.go:66] Loading cluster: addons-984173
	I1123 09:00:43.808665  292332 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:43.808683  292332 addons.go:622] checking whether the cluster is paused
	I1123 09:00:43.808791  292332 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:43.808807  292332 host.go:66] Checking if "addons-984173" exists ...
	I1123 09:00:43.809398  292332 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 09:00:43.834867  292332 ssh_runner.go:195] Run: systemctl --version
	I1123 09:00:43.834936  292332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 09:00:43.862177  292332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 09:00:43.984715  292332 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:00:43.984789  292332 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:00:44.026364  292332 cri.go:89] found id: "742ade421fb244b66d8fcfec87fa144fdc7f8738e38cca57ac6ac0bb8fbceba5"
	I1123 09:00:44.026384  292332 cri.go:89] found id: "f6783f9da95524615f3aa651e3af1196eb24de610f8b5966c9f13c754788eeea"
	I1123 09:00:44.026389  292332 cri.go:89] found id: "37d6af059fa8d9a5c10fe2947c3c9208c14a28bda6e706d53ace9352a57d3538"
	I1123 09:00:44.026393  292332 cri.go:89] found id: "66657c8a6cec57d0f3f4516fbacce8c43b7cd7b560ee7e99d4320d4d8ecee0db"
	I1123 09:00:44.026396  292332 cri.go:89] found id: "8586599f3919f69a8d7f1a7d090d598631c698412878d914f2b728fa92c78020"
	I1123 09:00:44.026400  292332 cri.go:89] found id: "6f902ae88d97ebbadeb5af33479296f1cb746c0980deddddd1b09ef5f3bc8365"
	I1123 09:00:44.026403  292332 cri.go:89] found id: "75511f019181b3813cc7d57031fb5c7b720c0760d787d3dc4e3bb9eab9e447b7"
	I1123 09:00:44.026406  292332 cri.go:89] found id: "de8e74b6f79cb01986f0143aa790500273203248c49b24f1e7569ebf6d7eea3b"
	I1123 09:00:44.026410  292332 cri.go:89] found id: "575e9ea051577a331acd367172e11954e99ac78da0892f1ce1556f6e7afc8bd1"
	I1123 09:00:44.026416  292332 cri.go:89] found id: "2b31531176241977a037c34aeb21cc0ee805446cd4582dd8c05f0bba5e5ee203"
	I1123 09:00:44.026419  292332 cri.go:89] found id: "bbd54f91446202b5a64aa6ec4f3f89b8ecf6e43bdac535a131f6367c8cea942c"
	I1123 09:00:44.026422  292332 cri.go:89] found id: "3c3749cfa9b1ed9f5c7d758974e38093080a45ccbe67f9df133d2a234c4d7216"
	I1123 09:00:44.026425  292332 cri.go:89] found id: "f93636a2eb282d8c5338280be50dffa8bd5f5b5cfff2c23a4c28fe0c8c63af6d"
	I1123 09:00:44.026428  292332 cri.go:89] found id: "8f1edccdddb80a5ba7c8da2abcb736527f5b92c08683957cf3031ee2a7946816"
	I1123 09:00:44.026432  292332 cri.go:89] found id: "1559bd52645fb109e782448eda0f021d65b39a587d504ef500408e924dfe9107"
	I1123 09:00:44.026441  292332 cri.go:89] found id: "6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da"
	I1123 09:00:44.026445  292332 cri.go:89] found id: "de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74"
	I1123 09:00:44.026449  292332 cri.go:89] found id: "87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c"
	I1123 09:00:44.026452  292332 cri.go:89] found id: "529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8"
	I1123 09:00:44.026455  292332 cri.go:89] found id: "22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca"
	I1123 09:00:44.026460  292332 cri.go:89] found id: "d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26"
	I1123 09:00:44.026463  292332 cri.go:89] found id: "61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c"
	I1123 09:00:44.026466  292332 cri.go:89] found id: "126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90"
	I1123 09:00:44.026469  292332 cri.go:89] found id: ""
	I1123 09:00:44.026518  292332 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:00:44.046637  292332 out.go:203] 
	W1123 09:00:44.049775  292332 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:00:44.049802  292332 out.go:285] * 
	* 
	W1123 09:00:44.059485  292332 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:00:44.062698  292332 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-984173 --alsologtostderr -v=1": exit status 11
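The exit status 11 above comes from minikube's paused-container check rather than from the headlamp addon itself: before enabling the addon the CLI shells into the node and runs `sudo runc list -f json`, and on this CRI-O node the runc state directory /run/runc does not exist, so the check fails. A minimal sketch of how one might re-run the failing step by hand, assuming the addons-984173 profile is still up (the crictl cross-check is only an illustration, not what the harness runs):

	# Re-run the exact command the paused check uses; on this node it fails
	# with "open /run/runc: no such file or directory"
	out/minikube-linux-arm64 -p addons-984173 ssh -- sudo runc list -f json

	# Cross-check via CRI-O's own tooling that the container IDs logged
	# above are still running (assumes crictl is configured inside the node image)
	out/minikube-linux-arm64 -p addons-984173 ssh -- sudo crictl ps -q
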
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-984173
helpers_test.go:243: (dbg) docker inspect addons-984173:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c",
	        "Created": "2025-11-23T08:57:57.310659194Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286067,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:57:57.383407496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c/hostname",
	        "HostsPath": "/var/lib/docker/containers/733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c/hosts",
	        "LogPath": "/var/lib/docker/containers/733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c/733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c-json.log",
	        "Name": "/addons-984173",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-984173:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-984173",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c",
	                "LowerDir": "/var/lib/docker/overlay2/c4e7a78fff0aea01be9146e8d2d65b224cce7cc0559b669da545caca15ec8f4b-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c4e7a78fff0aea01be9146e8d2d65b224cce7cc0559b669da545caca15ec8f4b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c4e7a78fff0aea01be9146e8d2d65b224cce7cc0559b669da545caca15ec8f4b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c4e7a78fff0aea01be9146e8d2d65b224cce7cc0559b669da545caca15ec8f4b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-984173",
	                "Source": "/var/lib/docker/volumes/addons-984173/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-984173",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-984173",
	                "name.minikube.sigs.k8s.io": "addons-984173",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e96523847ea90da0f314badfbe09f857cd905e9b110f4ad5c2cc3e84f3a93afa",
	            "SandboxKey": "/var/run/docker/netns/e96523847ea9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-984173": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:ea:74:47:83:9c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c5904d4a6f6f7099b2f69579ce698bfce4b7f9c7f43969a8d6c2e1da088445cb",
	                    "EndpointID": "44478c4b734361e96f3844242dd897b9df5f9c033de95bdb6d9525ca1c7409ea",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-984173",
	                        "733ef088474c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-984173 -n addons-984173
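Both the status call above and the docker inspect dump lend themselves to Go-template queries when only a field or two is needed; the templates minikube itself runs in the Last Start log further down (container state, the published SSH port, the node IP) can be replayed by hand against the inspected container. A short sketch, assuming the addons-984173 container still exists on the host:

	# Container state as reported in the inspect dump above
	docker inspect -f '{{.State.Status}}' addons-984173

	# Host port mapped to the node's SSH port (22/tcp)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-984173

	# Node IP on the addons-984173 network
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-984173
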
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-984173 logs -n 25: (1.921830582s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-447664 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-447664   │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ delete  │ -p download-only-447664                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-447664   │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ start   │ -o=json --download-only -p download-only-986034 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-986034   │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ delete  │ -p download-only-986034                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-986034   │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ delete  │ -p download-only-447664                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-447664   │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ delete  │ -p download-only-986034                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-986034   │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ start   │ --download-only -p download-docker-864519 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-864519 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ delete  │ -p download-docker-864519                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-864519 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ start   │ --download-only -p binary-mirror-135438 --alsologtostderr --binary-mirror http://127.0.0.1:42341 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-135438   │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ delete  │ -p binary-mirror-135438                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-135438   │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable dashboard -p addons-984173                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ addons  │ disable dashboard -p addons-984173                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ start   │ -p addons-984173 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 09:00 UTC │
	│ addons  │ addons-984173 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ addons  │ addons-984173 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ addons  │ addons-984173 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ addons  │ addons-984173 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ ip      │ addons-984173 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ addons  │ addons-984173 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ addons  │ addons-984173 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ addons  │ enable headlamp -p addons-984173 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ ssh     │ addons-984173 ssh cat /opt/local-path-provisioner/pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-984173          │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:57:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:57:33.078797  285663 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:57:33.078919  285663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:57:33.078931  285663 out.go:374] Setting ErrFile to fd 2...
	I1123 08:57:33.078942  285663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:57:33.079589  285663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 08:57:33.080160  285663 out.go:368] Setting JSON to false
	I1123 08:57:33.081017  285663 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6002,"bootTime":1763882251,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 08:57:33.081112  285663 start.go:143] virtualization:  
	I1123 08:57:33.084407  285663 out.go:179] * [addons-984173] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:57:33.088182  285663 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:57:33.088263  285663 notify.go:221] Checking for updates...
	I1123 08:57:33.093915  285663 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:57:33.096903  285663 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 08:57:33.099733  285663 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 08:57:33.102573  285663 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:57:33.105358  285663 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:57:33.108485  285663 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:57:33.133953  285663 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:57:33.134088  285663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:57:33.194605  285663 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-23 08:57:33.186080456 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:57:33.194704  285663 docker.go:319] overlay module found
	I1123 08:57:33.197798  285663 out.go:179] * Using the docker driver based on user configuration
	I1123 08:57:33.200566  285663 start.go:309] selected driver: docker
	I1123 08:57:33.200587  285663 start.go:927] validating driver "docker" against <nil>
	I1123 08:57:33.200602  285663 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:57:33.201305  285663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:57:33.253342  285663 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-23 08:57:33.244585815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:57:33.253522  285663 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:57:33.253760  285663 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:57:33.256696  285663 out.go:179] * Using Docker driver with root privileges
	I1123 08:57:33.259533  285663 cni.go:84] Creating CNI manager for ""
	I1123 08:57:33.259602  285663 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:57:33.259616  285663 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:57:33.259693  285663 start.go:353] cluster config:
	{Name:addons-984173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-984173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1123 08:57:33.262749  285663 out.go:179] * Starting "addons-984173" primary control-plane node in "addons-984173" cluster
	I1123 08:57:33.265444  285663 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:57:33.268252  285663 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:57:33.271034  285663 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:57:33.271079  285663 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 08:57:33.271094  285663 cache.go:65] Caching tarball of preloaded images
	I1123 08:57:33.271108  285663 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:57:33.271184  285663 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 08:57:33.271195  285663 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:57:33.271561  285663 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/config.json ...
	I1123 08:57:33.271594  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/config.json: {Name:mk7616ad40d907a35dda8e69123013a3c465e5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:33.286349  285663 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 08:57:33.286492  285663 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 08:57:33.286530  285663 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 08:57:33.286539  285663 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 08:57:33.286546  285663 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 08:57:33.286551  285663 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1123 08:57:51.152029  285663 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1123 08:57:51.152071  285663 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:57:51.152116  285663 start.go:360] acquireMachinesLock for addons-984173: {Name:mkae3618c5c75bc99801f8654bd1771081e55a95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:57:51.152866  285663 start.go:364] duration metric: took 722.395µs to acquireMachinesLock for "addons-984173"
	I1123 08:57:51.152908  285663 start.go:93] Provisioning new machine with config: &{Name:addons-984173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-984173 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:57:51.152989  285663 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:57:51.156395  285663 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1123 08:57:51.156685  285663 start.go:159] libmachine.API.Create for "addons-984173" (driver="docker")
	I1123 08:57:51.156734  285663 client.go:173] LocalClient.Create starting
	I1123 08:57:51.156869  285663 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem
	I1123 08:57:51.477283  285663 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem
	I1123 08:57:51.690063  285663 cli_runner.go:164] Run: docker network inspect addons-984173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:57:51.705839  285663 cli_runner.go:211] docker network inspect addons-984173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:57:51.705918  285663 network_create.go:284] running [docker network inspect addons-984173] to gather additional debugging logs...
	I1123 08:57:51.705937  285663 cli_runner.go:164] Run: docker network inspect addons-984173
	W1123 08:57:51.721336  285663 cli_runner.go:211] docker network inspect addons-984173 returned with exit code 1
	I1123 08:57:51.721382  285663 network_create.go:287] error running [docker network inspect addons-984173]: docker network inspect addons-984173: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-984173 not found
	I1123 08:57:51.721395  285663 network_create.go:289] output of [docker network inspect addons-984173]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-984173 not found
	
	** /stderr **
	I1123 08:57:51.721529  285663 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:57:51.737377  285663 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b0e790}
	I1123 08:57:51.737488  285663 network_create.go:124] attempt to create docker network addons-984173 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1123 08:57:51.737544  285663 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-984173 addons-984173
	I1123 08:57:51.808983  285663 network_create.go:108] docker network addons-984173 192.168.49.0/24 created
	I1123 08:57:51.809016  285663 kic.go:121] calculated static IP "192.168.49.2" for the "addons-984173" container
	I1123 08:57:51.809103  285663 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:57:51.828920  285663 cli_runner.go:164] Run: docker volume create addons-984173 --label name.minikube.sigs.k8s.io=addons-984173 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:57:51.846845  285663 oci.go:103] Successfully created a docker volume addons-984173
	I1123 08:57:51.846938  285663 cli_runner.go:164] Run: docker run --rm --name addons-984173-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-984173 --entrypoint /usr/bin/test -v addons-984173:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:57:52.817450  285663 oci.go:107] Successfully prepared a docker volume addons-984173
	I1123 08:57:52.817516  285663 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:57:52.817533  285663 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:57:52.817593  285663 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-984173:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:57:57.233709  285663 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-984173:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.416059913s)
	I1123 08:57:57.233744  285663 kic.go:203] duration metric: took 4.416208969s to extract preloaded images to volume ...
	W1123 08:57:57.233877  285663 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:57:57.233986  285663 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:57:57.296085  285663 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-984173 --name addons-984173 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-984173 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-984173 --network addons-984173 --ip 192.168.49.2 --volume addons-984173:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:57:57.598227  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Running}}
	I1123 08:57:57.617756  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:57:57.635904  285663 cli_runner.go:164] Run: docker exec addons-984173 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:57:57.684311  285663 oci.go:144] the created container "addons-984173" has a running status.
	I1123 08:57:57.684338  285663 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa...
	I1123 08:57:57.833286  285663 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:57:57.854677  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:57:57.877809  285663 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:57:57.877828  285663 kic_runner.go:114] Args: [docker exec --privileged addons-984173 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:57:57.948042  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:57:57.971140  285663 machine.go:94] provisionDockerMachine start ...
	I1123 08:57:57.971238  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:57:58.000953  285663 main.go:143] libmachine: Using SSH client type: native
	I1123 08:57:58.001315  285663 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I1123 08:57:58.001333  285663 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:57:58.002321  285663 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:58:01.153621  285663 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-984173
	
	I1123 08:58:01.153650  285663 ubuntu.go:182] provisioning hostname "addons-984173"
	I1123 08:58:01.153730  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:01.173486  285663 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:01.173810  285663 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I1123 08:58:01.173821  285663 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-984173 && echo "addons-984173" | sudo tee /etc/hostname
	I1123 08:58:01.334933  285663 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-984173
	
	I1123 08:58:01.335028  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:01.351920  285663 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:01.352257  285663 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I1123 08:58:01.352280  285663 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-984173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-984173/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-984173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:58:01.505572  285663 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:58:01.505597  285663 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 08:58:01.505623  285663 ubuntu.go:190] setting up certificates
	I1123 08:58:01.505633  285663 provision.go:84] configureAuth start
	I1123 08:58:01.505695  285663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-984173
	I1123 08:58:01.522222  285663 provision.go:143] copyHostCerts
	I1123 08:58:01.522309  285663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 08:58:01.522441  285663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 08:58:01.522505  285663 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 08:58:01.522565  285663 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.addons-984173 san=[127.0.0.1 192.168.49.2 addons-984173 localhost minikube]
	I1123 08:58:01.678063  285663 provision.go:177] copyRemoteCerts
	I1123 08:58:01.678137  285663 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:58:01.678185  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:01.703822  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:01.813455  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:58:01.831085  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:58:01.848085  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 08:58:01.865485  285663 provision.go:87] duration metric: took 359.826729ms to configureAuth
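	(editor note) configureAuth above generates a server certificate for the machine, signed by the local minikube CA and carrying the SANs listed in the log (127.0.0.1, 192.168.49.2, addons-984173, localhost, minikube), and copies it to /etc/docker on the node. A minimal openssl sketch of the same signing flow, for illustration only and not the exact code path minikube runs:
	    # hypothetical reproduction of the provision.go:117 step with openssl
	    openssl genrsa -out server-key.pem 2048
	    openssl req -new -key server-key.pem -subj "/O=jenkins.addons-984173" -out server.csr
	    openssl x509 -req -in server.csr \
	      -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -days 365 -out server.pem \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-984173,DNS:localhost,DNS:minikube')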
	I1123 08:58:01.865559  285663 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:58:01.865799  285663 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:01.865933  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:01.882699  285663 main.go:143] libmachine: Using SSH client type: native
	I1123 08:58:01.883020  285663 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I1123 08:58:01.883044  285663 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:58:02.177564  285663 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:58:02.177584  285663 machine.go:97] duration metric: took 4.206426115s to provisionDockerMachine
	I1123 08:58:02.177595  285663 client.go:176] duration metric: took 11.020851334s to LocalClient.Create
	I1123 08:58:02.177607  285663 start.go:167] duration metric: took 11.020926329s to libmachine.API.Create "addons-984173"
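	(editor note) the CRIO_MINIKUBE_OPTIONS drop-in written just above only matters if crio restarted cleanly; a quick hedged check that the file landed and the daemon came back, using commands assumed to be available on the node as elsewhere in this log:
	    cat /etc/sysconfig/crio.minikube   # should show CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    sudo systemctl is-active crio      # expect "active"
	    sudo crictl info > /dev/null && echo "CRI endpoint reachable"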
	I1123 08:58:02.177615  285663 start.go:293] postStartSetup for "addons-984173" (driver="docker")
	I1123 08:58:02.177625  285663 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:58:02.177706  285663 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:58:02.177757  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:02.194870  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:02.301273  285663 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:58:02.304568  285663 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:58:02.304604  285663 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:58:02.304616  285663 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 08:58:02.304684  285663 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 08:58:02.304711  285663 start.go:296] duration metric: took 127.089168ms for postStartSetup
	I1123 08:58:02.305028  285663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-984173
	I1123 08:58:02.321849  285663 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/config.json ...
	I1123 08:58:02.322157  285663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:58:02.322207  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:02.338705  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:02.438210  285663 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:58:02.442777  285663 start.go:128] duration metric: took 11.289772451s to createHost
	I1123 08:58:02.442804  285663 start.go:83] releasing machines lock for "addons-984173", held for 11.289918076s
	I1123 08:58:02.442872  285663 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-984173
	I1123 08:58:02.459980  285663 ssh_runner.go:195] Run: cat /version.json
	I1123 08:58:02.460000  285663 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:58:02.460028  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:02.460063  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:02.482905  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:02.502653  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:02.680113  285663 ssh_runner.go:195] Run: systemctl --version
	I1123 08:58:02.686449  285663 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:58:02.721441  285663 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:58:02.725741  285663 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:58:02.725811  285663 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:58:02.753761  285663 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:58:02.753787  285663 start.go:496] detecting cgroup driver to use...
	I1123 08:58:02.753820  285663 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:58:02.753871  285663 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:58:02.771717  285663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:58:02.784302  285663 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:58:02.784365  285663 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:58:02.802129  285663 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:58:02.820708  285663 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:58:02.942746  285663 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:58:03.074227  285663 docker.go:234] disabling docker service ...
	I1123 08:58:03.074295  285663 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:58:03.095266  285663 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:58:03.108522  285663 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:58:03.233781  285663 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:58:03.356432  285663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:58:03.370046  285663 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:58:03.383367  285663 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:58:03.383468  285663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:03.391837  285663 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 08:58:03.391927  285663 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:03.400908  285663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:03.409441  285663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:03.418212  285663 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:58:03.426437  285663 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:03.434935  285663 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:03.447797  285663 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:58:03.456620  285663 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:58:03.463992  285663 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:58:03.471293  285663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:58:03.584135  285663 ssh_runner.go:195] Run: sudo systemctl restart crio
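	(editor note) taken together, the sed edits above leave the relevant parts of /etc/crio/crio.conf.d/02-crio.conf looking roughly like this when crio is restarted (reconstructed from the commands in the log, not a capture of the real file; section headers assumed from CRI-O's standard layout):
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]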
	I1123 08:58:03.773705  285663 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:58:03.773806  285663 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:58:03.777765  285663 start.go:564] Will wait 60s for crictl version
	I1123 08:58:03.777831  285663 ssh_runner.go:195] Run: which crictl
	I1123 08:58:03.781331  285663 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:58:03.807062  285663 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:58:03.807243  285663 ssh_runner.go:195] Run: crio --version
	I1123 08:58:03.834377  285663 ssh_runner.go:195] Run: crio --version
	I1123 08:58:03.862971  285663 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 08:58:03.865789  285663 cli_runner.go:164] Run: docker network inspect addons-984173 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:58:03.881968  285663 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 08:58:03.885818  285663 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
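	(editor note) the one-liner above updates /etc/hosts by filtering out any stale host.minikube.internal entry, appending the new one, and copying the result back in one shot; a quick way to confirm the entry resolves inside the node (assumed commands, mirroring the style used throughout this log):
	    grep host.minikube.internal /etc/hosts   # expect: 192.168.49.1  host.minikube.internal
	    getent hosts host.minikube.internal      # the libc resolver should return the same address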
	I1123 08:58:03.895609  285663 kubeadm.go:884] updating cluster {Name:addons-984173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-984173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:58:03.895724  285663 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:58:03.895785  285663 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:58:03.928533  285663 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:58:03.928555  285663 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:58:03.928613  285663 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:58:03.954231  285663 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:58:03.954254  285663 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:58:03.954262  285663 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1123 08:58:03.954364  285663 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-984173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-984173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:58:03.954443  285663 ssh_runner.go:195] Run: crio config
	I1123 08:58:04.026096  285663 cni.go:84] Creating CNI manager for ""
	I1123 08:58:04.026142  285663 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:58:04.026167  285663 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:58:04.026192  285663 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-984173 NodeName:addons-984173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:58:04.026320  285663 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-984173"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
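	(editor note) the rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later copied into place; before handing it to kubeadm init it can be previewed without touching node state. A hedged sketch, assuming the same binary path used throughout this log:
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml --dry-run
	    # runs init against a temporary directory and prints what it would do,
	    # without writing manifests under /etc/kubernetes or starting the kubelet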
	
	I1123 08:58:04.026393  285663 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:58:04.034005  285663 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:58:04.034075  285663 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:58:04.041525  285663 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 08:58:04.053814  285663 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:58:04.066362  285663 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1123 08:58:04.078686  285663 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:58:04.082192  285663 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:58:04.091435  285663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:58:04.205748  285663 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:58:04.222941  285663 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173 for IP: 192.168.49.2
	I1123 08:58:04.222963  285663 certs.go:195] generating shared ca certs ...
	I1123 08:58:04.222978  285663 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.223173  285663 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 08:58:04.326775  285663 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt ...
	I1123 08:58:04.326809  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt: {Name:mk7b2cb380eb2c6d9b4c557b53e038640e948f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.327661  285663 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key ...
	I1123 08:58:04.327678  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key: {Name:mka195bd406baa7297b08ee2229e68eb23e70ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.327767  285663 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 08:58:04.400853  285663 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt ...
	I1123 08:58:04.400881  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt: {Name:mka3614bd2fc07777b02ee7c7a59e444e85c8007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.401044  285663 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key ...
	I1123 08:58:04.401056  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key: {Name:mk1d754cf0fedaac87b2d7052e74b68fdf7d3925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.401137  285663 certs.go:257] generating profile certs ...
	I1123 08:58:04.401197  285663 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.key
	I1123 08:58:04.401213  285663 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt with IP's: []
	I1123 08:58:04.515463  285663 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt ...
	I1123 08:58:04.515503  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: {Name:mk0cb577cba32d0ba0e8ed99eb58ab8036539ebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.515733  285663 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.key ...
	I1123 08:58:04.515752  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.key: {Name:mk4ee1cf36ada24c2eccc2269ed0c5e100c87767 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.516504  285663 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.key.05f7d282
	I1123 08:58:04.516529  285663 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.crt.05f7d282 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1123 08:58:04.663924  285663 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.crt.05f7d282 ...
	I1123 08:58:04.663956  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.crt.05f7d282: {Name:mk29ec3c06ffa94819688b6c04a4da23123ccd54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.664139  285663 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.key.05f7d282 ...
	I1123 08:58:04.664153  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.key.05f7d282: {Name:mkc23d3e4fba0a94d7fbb37262ce3a6a61cad94b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.664249  285663 certs.go:382] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.crt.05f7d282 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.crt
	I1123 08:58:04.664326  285663 certs.go:386] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.key.05f7d282 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.key
	I1123 08:58:04.664396  285663 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/proxy-client.key
	I1123 08:58:04.664415  285663 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/proxy-client.crt with IP's: []
	I1123 08:58:04.852251  285663 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/proxy-client.crt ...
	I1123 08:58:04.852281  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/proxy-client.crt: {Name:mk731cf009e1fac35f29d2f20663f6f28ce6a2db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.852458  285663 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/proxy-client.key ...
	I1123 08:58:04.852472  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/proxy-client.key: {Name:mkf308a93be8ab758fe161e4dfbaa4620498ab19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:04.853227  285663 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:58:04.853276  285663 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:58:04.853310  285663 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:58:04.853342  285663 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 08:58:04.853963  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:58:04.871661  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:58:04.890152  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:58:04.909743  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:58:04.928232  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 08:58:04.946069  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:58:04.964103  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:58:04.981854  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:58:05.002214  285663 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:58:05.022167  285663 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:58:05.036026  285663 ssh_runner.go:195] Run: openssl version
	I1123 08:58:05.042440  285663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:58:05.051057  285663 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:58:05.054975  285663 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:58:05.055043  285663 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:58:05.097000  285663 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
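	(editor note) the two steps above publish the minikube CA into OpenSSL's default trust directory: the certificate is linked into /etc/ssl/certs and a second symlink named after its subject hash (b5213941.0 in this run) is created, which is the lookup scheme openssl uses for -CApath. A compressed sketch of the same mechanism (commands assumed, mirroring the log):
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"         # HASH is b5213941 here
	    openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt  # should chain back to minikubeCA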
	I1123 08:58:05.106198  285663 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:58:05.111018  285663 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:58:05.111123  285663 kubeadm.go:401] StartCluster: {Name:addons-984173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-984173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:58:05.111224  285663 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:58:05.111301  285663 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:58:05.140676  285663 cri.go:89] found id: ""
	I1123 08:58:05.140795  285663 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:58:05.150943  285663 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:58:05.159253  285663 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:58:05.159348  285663 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:58:05.167358  285663 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:58:05.167379  285663 kubeadm.go:158] found existing configuration files:
	
	I1123 08:58:05.167453  285663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:58:05.175361  285663 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:58:05.175425  285663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:58:05.182863  285663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:58:05.190546  285663 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:58:05.190613  285663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:58:05.198229  285663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:58:05.205647  285663 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:58:05.205712  285663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:58:05.212960  285663 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:58:05.220411  285663 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:58:05.220488  285663 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:58:05.228126  285663 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:58:05.292752  285663 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 08:58:05.293041  285663 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 08:58:05.362332  285663 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:58:23.733845  285663 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:58:23.733904  285663 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:58:23.734011  285663 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:58:23.734086  285663 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:58:23.734134  285663 kubeadm.go:319] OS: Linux
	I1123 08:58:23.734184  285663 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:58:23.734235  285663 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:58:23.734287  285663 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:58:23.734335  285663 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:58:23.734386  285663 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:58:23.734442  285663 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:58:23.734491  285663 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:58:23.734543  285663 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:58:23.734592  285663 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:58:23.734667  285663 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:58:23.734771  285663 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:58:23.734866  285663 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:58:23.734933  285663 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:58:23.737900  285663 out.go:252]   - Generating certificates and keys ...
	I1123 08:58:23.737987  285663 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:58:23.738060  285663 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:58:23.738135  285663 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:58:23.738196  285663 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:58:23.738260  285663 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:58:23.738313  285663 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:58:23.738371  285663 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:58:23.738495  285663 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-984173 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 08:58:23.738551  285663 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:58:23.738669  285663 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-984173 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 08:58:23.738737  285663 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:58:23.738803  285663 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:58:23.738850  285663 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:58:23.738909  285663 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:58:23.738963  285663 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:58:23.739028  285663 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:58:23.739087  285663 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:58:23.739152  285663 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:58:23.739210  285663 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:58:23.739295  285663 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:58:23.739364  285663 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:58:23.744095  285663 out.go:252]   - Booting up control plane ...
	I1123 08:58:23.744234  285663 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:58:23.744346  285663 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:58:23.744424  285663 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:58:23.744565  285663 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:58:23.744673  285663 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:58:23.744784  285663 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:58:23.744892  285663 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:58:23.744961  285663 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:58:23.745130  285663 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:58:23.745250  285663 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:58:23.745316  285663 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.003522866s
	I1123 08:58:23.745477  285663 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:58:23.745568  285663 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1123 08:58:23.745683  285663 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:58:23.745805  285663 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:58:23.745897  285663 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.869430516s
	I1123 08:58:23.745974  285663 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.538959716s
	I1123 08:58:23.746047  285663 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501807408s
	I1123 08:58:23.746216  285663 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:58:23.746382  285663 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:58:23.746445  285663 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:58:23.746667  285663 kubeadm.go:319] [mark-control-plane] Marking the node addons-984173 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:58:23.746734  285663 kubeadm.go:319] [bootstrap-token] Using token: tb3p7g.n9zph3ueg2zzg57t
	I1123 08:58:23.749849  285663 out.go:252]   - Configuring RBAC rules ...
	I1123 08:58:23.749987  285663 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:58:23.750085  285663 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:58:23.750272  285663 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:58:23.750426  285663 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:58:23.750552  285663 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:58:23.750639  285663 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:58:23.750754  285663 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:58:23.750797  285663 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:58:23.750842  285663 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:58:23.750846  285663 kubeadm.go:319] 
	I1123 08:58:23.750905  285663 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:58:23.750909  285663 kubeadm.go:319] 
	I1123 08:58:23.750992  285663 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:58:23.750997  285663 kubeadm.go:319] 
	I1123 08:58:23.751022  285663 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:58:23.751080  285663 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:58:23.751131  285663 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:58:23.751134  285663 kubeadm.go:319] 
	I1123 08:58:23.751188  285663 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:58:23.751192  285663 kubeadm.go:319] 
	I1123 08:58:23.751239  285663 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:58:23.751242  285663 kubeadm.go:319] 
	I1123 08:58:23.751294  285663 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:58:23.751369  285663 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:58:23.751437  285663 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:58:23.751440  285663 kubeadm.go:319] 
	I1123 08:58:23.751524  285663 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:58:23.751601  285663 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:58:23.751604  285663 kubeadm.go:319] 
	I1123 08:58:23.751690  285663 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tb3p7g.n9zph3ueg2zzg57t \
	I1123 08:58:23.751793  285663 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:887f8119ffe4d5a917d34cb24e0eb6ba3996e6bcce8cd575315ae96526a54a7e \
	I1123 08:58:23.751813  285663 kubeadm.go:319] 	--control-plane 
	I1123 08:58:23.751817  285663 kubeadm.go:319] 
	I1123 08:58:23.751901  285663 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:58:23.751904  285663 kubeadm.go:319] 
	I1123 08:58:23.751986  285663 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tb3p7g.n9zph3ueg2zzg57t \
	I1123 08:58:23.752102  285663 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:887f8119ffe4d5a917d34cb24e0eb6ba3996e6bcce8cd575315ae96526a54a7e 
	I1123 08:58:23.752112  285663 cni.go:84] Creating CNI manager for ""
	I1123 08:58:23.752119  285663 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:58:23.755219  285663 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:58:23.758178  285663 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:58:23.769205  285663 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:58:23.769225  285663 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:58:23.781887  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:58:24.073519  285663 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:58:24.073643  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:24.073734  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-984173 minikube.k8s.io/updated_at=2025_11_23T08_58_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=addons-984173 minikube.k8s.io/primary=true
	I1123 08:58:24.297852  285663 ops.go:34] apiserver oom_adj: -16
	I1123 08:58:24.297964  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:24.798115  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:25.298663  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:25.798673  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:26.298095  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:26.798734  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:27.298615  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:27.798096  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:28.298376  285663 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:58:28.414783  285663 kubeadm.go:1114] duration metric: took 4.34117504s to wait for elevateKubeSystemPrivileges
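	(editor note) the elevateKubeSystemPrivileges step timed above is the minikube-rbac clusterrolebinding created at 08:58:24 plus the poll on the default service account; the binding is equivalent to applying roughly this manifest (reconstructed from the kubectl flags in the log, not the actual stored object):
	    apiVersion: rbac.authorization.k8s.io/v1
	    kind: ClusterRoleBinding
	    metadata:
	      name: minikube-rbac
	    roleRef:
	      apiGroup: rbac.authorization.k8s.io
	      kind: ClusterRole
	      name: cluster-admin
	    subjects:
	    - kind: ServiceAccount
	      name: default
	      namespace: kube-system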
	I1123 08:58:28.414811  285663 kubeadm.go:403] duration metric: took 23.303700743s to StartCluster
	I1123 08:58:28.414828  285663 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:28.415597  285663 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 08:58:28.415991  285663 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:58:28.416189  285663 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:58:28.416357  285663 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:58:28.416619  285663 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:28.416655  285663 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1123 08:58:28.416741  285663 addons.go:70] Setting yakd=true in profile "addons-984173"
	I1123 08:58:28.416755  285663 addons.go:239] Setting addon yakd=true in "addons-984173"
	I1123 08:58:28.416776  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.417265  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.417889  285663 addons.go:70] Setting inspektor-gadget=true in profile "addons-984173"
	I1123 08:58:28.417917  285663 addons.go:239] Setting addon inspektor-gadget=true in "addons-984173"
	I1123 08:58:28.417942  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.418146  285663 addons.go:70] Setting metrics-server=true in profile "addons-984173"
	I1123 08:58:28.418163  285663 addons.go:239] Setting addon metrics-server=true in "addons-984173"
	I1123 08:58:28.418184  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.418385  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.418600  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.422285  285663 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-984173"
	I1123 08:58:28.422320  285663 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-984173"
	I1123 08:58:28.422352  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.422811  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.423260  285663 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-984173"
	I1123 08:58:28.424590  285663 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-984173"
	I1123 08:58:28.424750  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.426833  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.423417  285663 addons.go:70] Setting cloud-spanner=true in profile "addons-984173"
	I1123 08:58:28.430054  285663 addons.go:239] Setting addon cloud-spanner=true in "addons-984173"
	I1123 08:58:28.430104  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.430538  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.423428  285663 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-984173"
	I1123 08:58:28.446564  285663 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-984173"
	I1123 08:58:28.446599  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.423438  285663 addons.go:70] Setting default-storageclass=true in profile "addons-984173"
	I1123 08:58:28.446723  285663 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-984173"
	I1123 08:58:28.446998  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.454558  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.423444  285663 addons.go:70] Setting gcp-auth=true in profile "addons-984173"
	I1123 08:58:28.465315  285663 mustload.go:66] Loading cluster: addons-984173
	I1123 08:58:28.465596  285663 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:28.465866  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.423451  285663 addons.go:70] Setting ingress=true in profile "addons-984173"
	I1123 08:58:28.490838  285663 addons.go:239] Setting addon ingress=true in "addons-984173"
	I1123 08:58:28.490886  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.491351  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.423457  285663 addons.go:70] Setting ingress-dns=true in profile "addons-984173"
	I1123 08:58:28.502236  285663 addons.go:239] Setting addon ingress-dns=true in "addons-984173"
	I1123 08:58:28.502293  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.502762  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.423502  285663 out.go:179] * Verifying Kubernetes components...
	I1123 08:58:28.424461  285663 addons.go:70] Setting volcano=true in profile "addons-984173"
	I1123 08:58:28.571914  285663 addons.go:239] Setting addon volcano=true in "addons-984173"
	I1123 08:58:28.571966  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.572447  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.579454  285663 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:58:28.424472  285663 addons.go:70] Setting registry=true in profile "addons-984173"
	I1123 08:58:28.589296  285663 addons.go:239] Setting addon registry=true in "addons-984173"
	I1123 08:58:28.589340  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.424497  285663 addons.go:70] Setting registry-creds=true in profile "addons-984173"
	I1123 08:58:28.589611  285663 addons.go:239] Setting addon registry-creds=true in "addons-984173"
	I1123 08:58:28.589634  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.590084  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.603230  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.424506  285663 addons.go:70] Setting storage-provisioner=true in profile "addons-984173"
	I1123 08:58:28.603627  285663 addons.go:239] Setting addon storage-provisioner=true in "addons-984173"
	I1123 08:58:28.603662  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.604070  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.621895  285663 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1123 08:58:28.424511  285663 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-984173"
	I1123 08:58:28.624195  285663 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-984173"
	I1123 08:58:28.624524  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.627409  285663 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 08:58:28.627457  285663 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 08:58:28.627523  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.641830  285663 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1123 08:58:28.645129  285663 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1123 08:58:28.645153  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1123 08:58:28.645217  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.424531  285663 addons.go:70] Setting volumesnapshots=true in profile "addons-984173"
	I1123 08:58:28.649010  285663 addons.go:239] Setting addon volumesnapshots=true in "addons-984173"
	I1123 08:58:28.649049  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.649528  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.663147  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1123 08:58:28.663211  285663 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1123 08:58:28.666625  285663 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1123 08:58:28.666654  285663 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1123 08:58:28.666736  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.672327  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1123 08:58:28.675281  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1123 08:58:28.679870  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1123 08:58:28.684589  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1123 08:58:28.687620  285663 addons.go:239] Setting addon default-storageclass=true in "addons-984173"
	I1123 08:58:28.687664  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.688129  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.693453  285663 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1123 08:58:28.693581  285663 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1123 08:58:28.693640  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.736496  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1123 08:58:28.741689  285663 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1123 08:58:28.741710  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1123 08:58:28.741776  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.751936  285663 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 08:58:28.752016  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1123 08:58:28.752111  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.772508  285663 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1123 08:58:28.780611  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1123 08:58:28.717367  285663 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1123 08:58:28.816367  285663 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1123 08:58:28.823199  285663 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 08:58:28.823280  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1123 08:58:28.823395  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.830531  285663 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1123 08:58:28.830747  285663 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 08:58:28.830760  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1123 08:58:28.830904  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	W1123 08:58:28.839217  285663 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1123 08:58:28.855179  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1123 08:58:28.855402  285663 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 08:58:28.855529  285663 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 08:58:28.855543  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1123 08:58:28.855606  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.858375  285663 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:58:28.866863  285663 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1123 08:58:28.866960  285663 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1123 08:58:28.867065  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.889229  285663 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1123 08:58:28.892972  285663 out.go:179]   - Using image docker.io/registry:3.0.0
	I1123 08:58:28.894371  285663 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 08:58:28.899613  285663 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1123 08:58:28.899637  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1123 08:58:28.899721  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.903125  285663 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 08:58:28.903194  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1123 08:58:28.903280  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.938187  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:28.939232  285663 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-984173"
	I1123 08:58:28.939270  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:28.939705  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:28.947211  285663 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:58:28.947232  285663 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:58:28.947289  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.949907  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:28.953917  285663 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:58:28.956953  285663 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:58:28.956973  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:58:28.957034  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:28.978097  285663 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1123 08:58:28.982220  285663 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1123 08:58:28.982249  285663 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1123 08:58:28.982321  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:29.034009  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.034866  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.035782  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.038695  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.051871  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.072797  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.085464  285663 out.go:179]   - Using image docker.io/busybox:stable
	I1123 08:58:29.089696  285663 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1123 08:58:29.092603  285663 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 08:58:29.092625  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1123 08:58:29.092694  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:29.096231  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.105894  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.121307  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.146036  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.166544  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.168119  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	W1123 08:58:29.178365  285663 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 08:58:29.178405  285663 retry.go:31] will retry after 172.408471ms: ssh: handshake failed: EOF
	I1123 08:58:29.183804  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:29.192462  285663 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1123 08:58:29.193934  285663 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 08:58:29.193957  285663 retry.go:31] will retry after 312.038601ms: ssh: handshake failed: EOF
	I1123 08:58:29.714677  285663 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1123 08:58:29.714778  285663 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1123 08:58:29.747151  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 08:58:29.767970  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 08:58:29.775979  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:58:29.780947  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 08:58:29.806715  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1123 08:58:29.812905  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1123 08:58:29.874213  285663 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 08:58:29.874325  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1123 08:58:29.924368  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 08:58:29.946772  285663 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1123 08:58:29.946875  285663 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1123 08:58:29.971357  285663 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1123 08:58:29.971423  285663 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1123 08:58:29.987202  285663 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1123 08:58:29.987295  285663 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1123 08:58:30.002470  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 08:58:30.013198  285663 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1123 08:58:30.013292  285663 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1123 08:58:30.037857  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 08:58:30.120895  285663 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 08:58:30.120980  285663 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 08:58:30.165123  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:58:30.201017  285663 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1123 08:58:30.201046  285663 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1123 08:58:30.203525  285663 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1123 08:58:30.203550  285663 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1123 08:58:30.206144  285663 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1123 08:58:30.206169  285663 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1123 08:58:30.236977  285663 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1123 08:58:30.237003  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1123 08:58:30.342026  285663 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:58:30.342113  285663 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 08:58:30.394444  285663 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1123 08:58:30.394518  285663 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1123 08:58:30.425931  285663 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1123 08:58:30.426004  285663 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1123 08:58:30.429342  285663 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1123 08:58:30.429440  285663 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1123 08:58:30.445521  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:58:30.454811  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1123 08:58:30.642967  285663 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1123 08:58:30.643046  285663 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1123 08:58:30.658733  285663 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 08:58:30.658803  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1123 08:58:30.688610  285663 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1123 08:58:30.688638  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1123 08:58:30.719483  285663 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.860999878s)
	I1123 08:58:30.719517  285663 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1123 08:58:30.720471  285663 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.527982474s)
	I1123 08:58:30.721071  285663 node_ready.go:35] waiting up to 6m0s for node "addons-984173" to be "Ready" ...
	I1123 08:58:30.909053  285663 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1123 08:58:30.909136  285663 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1123 08:58:30.910445  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 08:58:30.957985  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1123 08:58:31.030873  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.283595348s)
	I1123 08:58:31.030979  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.262913564s)
	I1123 08:58:31.037982  285663 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1123 08:58:31.038065  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1123 08:58:31.229554  285663 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1123 08:58:31.229578  285663 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1123 08:58:31.236210  285663 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-984173" context rescaled to 1 replicas
	I1123 08:58:31.250799  285663 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1123 08:58:31.250824  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1123 08:58:31.266744  285663 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1123 08:58:31.266818  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1123 08:58:31.281985  285663 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 08:58:31.282069  285663 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1123 08:58:31.297140  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1123 08:58:32.758043  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:33.531245  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.750187558s)
	I1123 08:58:33.531362  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.755303257s)
	I1123 08:58:33.722415  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (3.915668046s)
	I1123 08:58:33.722477  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.909546905s)
	I1123 08:58:34.772399  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.769829607s)
	I1123 08:58:34.772464  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.734529056s)
	I1123 08:58:34.772659  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.607470817s)
	I1123 08:58:34.772774  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.327178983s)
	I1123 08:58:34.772785  285663 addons.go:495] Verifying addon metrics-server=true in "addons-984173"
	I1123 08:58:34.772813  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.317929906s)
	I1123 08:58:34.772822  285663 addons.go:495] Verifying addon registry=true in "addons-984173"
	I1123 08:58:34.773081  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.862572083s)
	W1123 08:58:34.773108  285663 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 08:58:34.773125  285663 retry.go:31] will retry after 169.668057ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 08:58:34.773167  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.815107856s)
	I1123 08:58:34.773361  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.848907182s)
	I1123 08:58:34.773397  285663 addons.go:495] Verifying addon ingress=true in "addons-984173"
	I1123 08:58:34.776854  285663 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-984173 service yakd-dashboard -n yakd-dashboard
	
	I1123 08:58:34.776967  285663 out.go:179] * Verifying registry addon...
	I1123 08:58:34.777013  285663 out.go:179] * Verifying ingress addon...
	I1123 08:58:34.781248  285663 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1123 08:58:34.782278  285663 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W1123 08:58:34.790756  285663 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1123 08:58:34.791585  285663 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 08:58:34.791628  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:34.792141  285663 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1123 08:58:34.792182  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:34.943916  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 08:58:35.046997  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.749739942s)
	I1123 08:58:35.047080  285663 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-984173"
	I1123 08:58:35.050126  285663 out.go:179] * Verifying csi-hostpath-driver addon...
	I1123 08:58:35.053950  285663 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1123 08:58:35.062457  285663 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 08:58:35.062530  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 08:58:35.224102  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:35.286433  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:35.286818  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:35.558974  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:35.785266  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:35.786009  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:36.057650  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:36.285399  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:36.286353  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:36.395351  285663 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1123 08:58:36.395452  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:36.412175  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:36.534846  285663 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1123 08:58:36.547757  285663 addons.go:239] Setting addon gcp-auth=true in "addons-984173"
	I1123 08:58:36.547857  285663 host.go:66] Checking if "addons-984173" exists ...
	I1123 08:58:36.548356  285663 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 08:58:36.557514  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:36.568314  285663 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1123 08:58:36.568365  285663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 08:58:36.584754  285663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 08:58:36.785629  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:36.785674  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:37.058271  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:37.285088  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:37.285742  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:37.557703  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:37.642863  285663 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.698902312s)
	I1123 08:58:37.643034  285663 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.074694919s)
	I1123 08:58:37.646322  285663 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 08:58:37.649210  285663 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1123 08:58:37.652037  285663 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1123 08:58:37.652061  285663 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1123 08:58:37.665915  285663 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1123 08:58:37.665981  285663 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1123 08:58:37.679369  285663 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 08:58:37.679397  285663 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1123 08:58:37.692420  285663 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W1123 08:58:37.724607  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:37.787454  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:37.787890  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:38.065161  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:38.196221  285663 addons.go:495] Verifying addon gcp-auth=true in "addons-984173"
	I1123 08:58:38.199309  285663 out.go:179] * Verifying gcp-auth addon...
	I1123 08:58:38.203025  285663 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1123 08:58:38.218065  285663 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1123 08:58:38.218153  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:38.285749  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:38.286096  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:38.556806  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:38.706104  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:38.785928  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:38.786270  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:39.057604  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:39.206401  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:39.284802  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:39.285806  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:39.556951  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:39.706633  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:39.786073  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:39.786690  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:40.057109  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:40.206991  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:40.224718  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:40.285878  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:40.286194  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:40.557126  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:40.706094  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:40.785969  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:40.787506  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:41.057724  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:41.206920  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:41.285889  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:41.286061  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:41.557329  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:41.707627  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:41.785681  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:41.785862  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:42.059175  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:42.206781  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:42.224914  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:42.285901  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:42.286315  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:42.557864  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:42.706738  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:42.786170  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:42.786564  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:43.058200  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:43.206581  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:43.285733  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:43.285968  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:43.557758  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:43.707681  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:43.785793  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:43.785830  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:44.058410  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:44.206349  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:44.284973  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:44.286065  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:44.556849  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:44.707277  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:44.723970  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:44.785070  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:44.786356  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:45.058651  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:45.208153  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:45.286051  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:45.287302  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:45.557900  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:45.706914  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:45.785616  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:45.786069  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:46.057300  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:46.206428  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:46.285019  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:46.286284  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:46.557495  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:46.706335  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:46.724264  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:46.785151  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:46.786667  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:47.057330  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:47.205900  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:47.286011  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:47.286279  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:47.556803  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:47.706773  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:47.785810  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:47.785986  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:48.057524  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:48.206525  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:48.284772  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:48.285904  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:48.557126  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:48.706107  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:48.785718  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:48.786242  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:49.057747  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:49.206595  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:49.224323  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:49.285519  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:49.285816  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:49.556800  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:49.706999  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:49.784870  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:49.787266  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:50.057592  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:50.206471  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:50.285431  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:50.285494  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:50.557181  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:50.707483  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:50.785269  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:50.786575  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:51.058283  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:51.207461  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:51.284981  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:51.285847  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:51.557956  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:51.706842  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:51.724338  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:51.785670  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:51.785865  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:52.059450  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:52.206062  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:52.284702  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:52.286358  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:52.557164  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:52.706215  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:52.785316  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:52.786581  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:53.059085  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:53.206309  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:53.284672  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:53.286133  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:53.557342  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:53.710583  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:53.785339  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:53.785946  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:54.057069  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:54.206989  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:54.224942  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:54.284735  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:54.286197  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:54.557214  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:54.706071  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:54.784660  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:54.785347  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:55.058133  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:55.206171  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:55.284945  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:55.286735  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:55.556857  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:55.706967  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:55.785795  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:55.785950  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:56.057293  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:56.206591  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:56.284834  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:56.286378  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:56.557478  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:56.706538  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:56.724289  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:56.785249  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:56.786055  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:57.057131  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:57.206078  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:57.299620  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:57.299698  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:57.557008  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:57.707256  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:57.785741  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:57.786122  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:58.057712  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:58.205794  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:58.285718  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:58.285908  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:58.557935  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:58.706150  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:58:58.724832  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:58:58.785796  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:58.785969  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:59.056950  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:59.207050  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:59.286368  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:58:59.286824  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:59.556815  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:58:59.706649  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:58:59.785200  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:58:59.786067  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:00.068940  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:00.210037  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:00.302811  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:00.302990  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:00.556951  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:00.705466  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:00.784594  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:00.785629  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:01.058065  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:01.206206  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:59:01.224220  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:59:01.284841  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:01.285873  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:01.558381  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:01.706147  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:01.785912  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:01.786210  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:02.059875  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:02.206842  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:02.285692  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:02.285932  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:02.556760  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:02.707078  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:02.785295  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:02.786211  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:03.057483  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:03.207412  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:03.285061  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:03.286175  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:03.557033  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:03.705891  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:59:03.724445  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:59:03.786276  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:03.786503  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:04.057816  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:04.206785  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:04.285786  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:04.286202  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:04.557301  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:04.706309  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:04.784820  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:04.786178  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:05.057326  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:05.206151  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:05.284976  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:05.285886  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:05.557265  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:05.706431  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:05.785634  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:05.785746  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:06.057108  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:06.206250  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:59:06.223992  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:59:06.284544  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:06.285077  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:06.556895  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:06.706675  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:06.785568  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:06.785763  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:07.056979  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:07.205974  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:07.285732  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:07.285841  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:07.557276  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:07.706497  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:07.785709  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:07.786265  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:08.057660  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:08.206775  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:08.285917  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:08.286191  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:08.557110  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:08.706196  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1123 08:59:08.724234  285663 node_ready.go:57] node "addons-984173" has "Ready":"False" status (will retry)
	I1123 08:59:08.784994  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:08.786291  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:09.057143  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:09.205752  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:09.285321  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:09.286947  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:09.557186  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:09.705895  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:09.728771  285663 node_ready.go:49] node "addons-984173" is "Ready"
	I1123 08:59:09.728804  285663 node_ready.go:38] duration metric: took 39.007711408s for node "addons-984173" to be "Ready" ...
	I1123 08:59:09.728819  285663 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:59:09.728900  285663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:59:09.747005  285663 api_server.go:72] duration metric: took 41.330787976s to wait for apiserver process to appear ...
	I1123 08:59:09.747033  285663 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:59:09.747052  285663 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 08:59:09.769899  285663 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 08:59:09.771634  285663 api_server.go:141] control plane version: v1.34.1
	I1123 08:59:09.771670  285663 api_server.go:131] duration metric: took 24.630281ms to wait for apiserver health ...
	I1123 08:59:09.771680  285663 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:59:09.834685  285663 system_pods.go:59] 18 kube-system pods found
	I1123 08:59:09.834723  285663 system_pods.go:61] "coredns-66bc5c9577-d2nfj" [bfd6365a-09c7-4c05-879a-e2eb73527961] Pending
	I1123 08:59:09.834729  285663 system_pods.go:61] "csi-hostpath-attacher-0" [0fc9010e-94b4-4c43-b918-6beb79362f03] Pending
	I1123 08:59:09.834758  285663 system_pods.go:61] "csi-hostpath-resizer-0" [b6ebc0dc-1eaf-429b-8a6a-f632e2fc6e17] Pending
	I1123 08:59:09.834772  285663 system_pods.go:61] "csi-hostpathplugin-2kj78" [61ab5b5b-d56e-493a-99cb-522dc9af7cfe] Pending
	I1123 08:59:09.834776  285663 system_pods.go:61] "etcd-addons-984173" [fd3e24a8-6c25-4b06-91bb-546e6bbf3282] Running
	I1123 08:59:09.834781  285663 system_pods.go:61] "kindnet-694tf" [c26ca19a-a40f-44d6-b753-479597734109] Running
	I1123 08:59:09.834785  285663 system_pods.go:61] "kube-apiserver-addons-984173" [24699bd1-e12c-409d-a518-2986ee6304e4] Running
	I1123 08:59:09.834788  285663 system_pods.go:61] "kube-controller-manager-addons-984173" [db313608-3959-4244-9fdf-e032e049f063] Running
	I1123 08:59:09.834803  285663 system_pods.go:61] "kube-ingress-dns-minikube" [47acf22f-c161-46e7-8b97-692610b92f19] Pending
	I1123 08:59:09.834807  285663 system_pods.go:61] "kube-proxy-wfr86" [161587c4-704b-4433-bd0d-df2bbce113bf] Running
	I1123 08:59:09.834826  285663 system_pods.go:61] "kube-scheduler-addons-984173" [b4634404-b4ec-4f19-a5a0-37e6063ffe91] Running
	I1123 08:59:09.834836  285663 system_pods.go:61] "metrics-server-85b7d694d7-q7k2v" [10e46f11-8afd-4338-abf6-90235104b38c] Pending
	I1123 08:59:09.834840  285663 system_pods.go:61] "registry-6b586f9694-r7jl6" [30719118-851e-4542-a4f8-c89f68f6bd04] Pending
	I1123 08:59:09.834843  285663 system_pods.go:61] "registry-creds-764b6fb674-lxww8" [5a528301-690c-4034-989a-9dd8b4c6b876] Pending
	I1123 08:59:09.834861  285663 system_pods.go:61] "registry-proxy-xt9vl" [0e2ac94e-9a8b-4407-8901-cdf4a4fdfc8a] Pending
	I1123 08:59:09.834873  285663 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gbxvb" [b0c3071e-fd3e-4174-a01b-9138498a07c1] Pending
	I1123 08:59:09.834877  285663 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qrk99" [235dd6f3-7b70-4cd1-b640-d2d877b3a77c] Pending
	I1123 08:59:09.834881  285663 system_pods.go:61] "storage-provisioner" [fbafe21b-5d28-4c82-a702-3ac2a06c124d] Pending
	I1123 08:59:09.834901  285663 system_pods.go:74] duration metric: took 63.202611ms to wait for pod list to return data ...
	I1123 08:59:09.834916  285663 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:59:09.852209  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:09.876672  285663 default_sa.go:45] found service account: "default"
	I1123 08:59:09.876710  285663 default_sa.go:55] duration metric: took 41.786618ms for default service account to be created ...
	I1123 08:59:09.876721  285663 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:59:09.896390  285663 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 08:59:09.896416  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:09.905743  285663 system_pods.go:86] 19 kube-system pods found
	I1123 08:59:09.905774  285663 system_pods.go:89] "coredns-66bc5c9577-d2nfj" [bfd6365a-09c7-4c05-879a-e2eb73527961] Pending
	I1123 08:59:09.905780  285663 system_pods.go:89] "csi-hostpath-attacher-0" [0fc9010e-94b4-4c43-b918-6beb79362f03] Pending
	I1123 08:59:09.905784  285663 system_pods.go:89] "csi-hostpath-resizer-0" [b6ebc0dc-1eaf-429b-8a6a-f632e2fc6e17] Pending
	I1123 08:59:09.905788  285663 system_pods.go:89] "csi-hostpathplugin-2kj78" [61ab5b5b-d56e-493a-99cb-522dc9af7cfe] Pending
	I1123 08:59:09.905791  285663 system_pods.go:89] "etcd-addons-984173" [fd3e24a8-6c25-4b06-91bb-546e6bbf3282] Running
	I1123 08:59:09.905821  285663 system_pods.go:89] "kindnet-694tf" [c26ca19a-a40f-44d6-b753-479597734109] Running
	I1123 08:59:09.905832  285663 system_pods.go:89] "kube-apiserver-addons-984173" [24699bd1-e12c-409d-a518-2986ee6304e4] Running
	I1123 08:59:09.905837  285663 system_pods.go:89] "kube-controller-manager-addons-984173" [db313608-3959-4244-9fdf-e032e049f063] Running
	I1123 08:59:09.905842  285663 system_pods.go:89] "kube-ingress-dns-minikube" [47acf22f-c161-46e7-8b97-692610b92f19] Pending
	I1123 08:59:09.905852  285663 system_pods.go:89] "kube-proxy-wfr86" [161587c4-704b-4433-bd0d-df2bbce113bf] Running
	I1123 08:59:09.905858  285663 system_pods.go:89] "kube-scheduler-addons-984173" [b4634404-b4ec-4f19-a5a0-37e6063ffe91] Running
	I1123 08:59:09.905867  285663 system_pods.go:89] "metrics-server-85b7d694d7-q7k2v" [10e46f11-8afd-4338-abf6-90235104b38c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:59:09.905873  285663 system_pods.go:89] "nvidia-device-plugin-daemonset-brqdp" [3eb0354a-72da-4330-8b65-cdb7395b7a35] Pending
	I1123 08:59:09.905906  285663 system_pods.go:89] "registry-6b586f9694-r7jl6" [30719118-851e-4542-a4f8-c89f68f6bd04] Pending
	I1123 08:59:09.905918  285663 system_pods.go:89] "registry-creds-764b6fb674-lxww8" [5a528301-690c-4034-989a-9dd8b4c6b876] Pending
	I1123 08:59:09.905922  285663 system_pods.go:89] "registry-proxy-xt9vl" [0e2ac94e-9a8b-4407-8901-cdf4a4fdfc8a] Pending
	I1123 08:59:09.905926  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gbxvb" [b0c3071e-fd3e-4174-a01b-9138498a07c1] Pending
	I1123 08:59:09.905936  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qrk99" [235dd6f3-7b70-4cd1-b640-d2d877b3a77c] Pending
	I1123 08:59:09.905940  285663 system_pods.go:89] "storage-provisioner" [fbafe21b-5d28-4c82-a702-3ac2a06c124d] Pending
	I1123 08:59:09.905955  285663 retry.go:31] will retry after 245.945242ms: missing components: kube-dns
	I1123 08:59:10.116359  285663 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 08:59:10.116391  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:10.179100  285663 system_pods.go:86] 19 kube-system pods found
	I1123 08:59:10.179134  285663 system_pods.go:89] "coredns-66bc5c9577-d2nfj" [bfd6365a-09c7-4c05-879a-e2eb73527961] Pending
	I1123 08:59:10.179149  285663 system_pods.go:89] "csi-hostpath-attacher-0" [0fc9010e-94b4-4c43-b918-6beb79362f03] Pending
	I1123 08:59:10.179177  285663 system_pods.go:89] "csi-hostpath-resizer-0" [b6ebc0dc-1eaf-429b-8a6a-f632e2fc6e17] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:59:10.179190  285663 system_pods.go:89] "csi-hostpathplugin-2kj78" [61ab5b5b-d56e-493a-99cb-522dc9af7cfe] Pending
	I1123 08:59:10.179196  285663 system_pods.go:89] "etcd-addons-984173" [fd3e24a8-6c25-4b06-91bb-546e6bbf3282] Running
	I1123 08:59:10.179201  285663 system_pods.go:89] "kindnet-694tf" [c26ca19a-a40f-44d6-b753-479597734109] Running
	I1123 08:59:10.179224  285663 system_pods.go:89] "kube-apiserver-addons-984173" [24699bd1-e12c-409d-a518-2986ee6304e4] Running
	I1123 08:59:10.179236  285663 system_pods.go:89] "kube-controller-manager-addons-984173" [db313608-3959-4244-9fdf-e032e049f063] Running
	I1123 08:59:10.179243  285663 system_pods.go:89] "kube-ingress-dns-minikube" [47acf22f-c161-46e7-8b97-692610b92f19] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:59:10.179254  285663 system_pods.go:89] "kube-proxy-wfr86" [161587c4-704b-4433-bd0d-df2bbce113bf] Running
	I1123 08:59:10.179259  285663 system_pods.go:89] "kube-scheduler-addons-984173" [b4634404-b4ec-4f19-a5a0-37e6063ffe91] Running
	I1123 08:59:10.179265  285663 system_pods.go:89] "metrics-server-85b7d694d7-q7k2v" [10e46f11-8afd-4338-abf6-90235104b38c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:59:10.179274  285663 system_pods.go:89] "nvidia-device-plugin-daemonset-brqdp" [3eb0354a-72da-4330-8b65-cdb7395b7a35] Pending
	I1123 08:59:10.179280  285663 system_pods.go:89] "registry-6b586f9694-r7jl6" [30719118-851e-4542-a4f8-c89f68f6bd04] Pending
	I1123 08:59:10.179285  285663 system_pods.go:89] "registry-creds-764b6fb674-lxww8" [5a528301-690c-4034-989a-9dd8b4c6b876] Pending
	I1123 08:59:10.179304  285663 system_pods.go:89] "registry-proxy-xt9vl" [0e2ac94e-9a8b-4407-8901-cdf4a4fdfc8a] Pending
	I1123 08:59:10.179319  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gbxvb" [b0c3071e-fd3e-4174-a01b-9138498a07c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:59:10.179336  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qrk99" [235dd6f3-7b70-4cd1-b640-d2d877b3a77c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:59:10.179350  285663 system_pods.go:89] "storage-provisioner" [fbafe21b-5d28-4c82-a702-3ac2a06c124d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:59:10.179380  285663 retry.go:31] will retry after 336.66339ms: missing components: kube-dns
	I1123 08:59:10.268006  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:10.291167  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:10.292864  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:10.523566  285663 system_pods.go:86] 19 kube-system pods found
	I1123 08:59:10.523604  285663 system_pods.go:89] "coredns-66bc5c9577-d2nfj" [bfd6365a-09c7-4c05-879a-e2eb73527961] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:59:10.523637  285663 system_pods.go:89] "csi-hostpath-attacher-0" [0fc9010e-94b4-4c43-b918-6beb79362f03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:59:10.523653  285663 system_pods.go:89] "csi-hostpath-resizer-0" [b6ebc0dc-1eaf-429b-8a6a-f632e2fc6e17] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:59:10.523661  285663 system_pods.go:89] "csi-hostpathplugin-2kj78" [61ab5b5b-d56e-493a-99cb-522dc9af7cfe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 08:59:10.523666  285663 system_pods.go:89] "etcd-addons-984173" [fd3e24a8-6c25-4b06-91bb-546e6bbf3282] Running
	I1123 08:59:10.523671  285663 system_pods.go:89] "kindnet-694tf" [c26ca19a-a40f-44d6-b753-479597734109] Running
	I1123 08:59:10.523678  285663 system_pods.go:89] "kube-apiserver-addons-984173" [24699bd1-e12c-409d-a518-2986ee6304e4] Running
	I1123 08:59:10.523700  285663 system_pods.go:89] "kube-controller-manager-addons-984173" [db313608-3959-4244-9fdf-e032e049f063] Running
	I1123 08:59:10.523714  285663 system_pods.go:89] "kube-ingress-dns-minikube" [47acf22f-c161-46e7-8b97-692610b92f19] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:59:10.523719  285663 system_pods.go:89] "kube-proxy-wfr86" [161587c4-704b-4433-bd0d-df2bbce113bf] Running
	I1123 08:59:10.523738  285663 system_pods.go:89] "kube-scheduler-addons-984173" [b4634404-b4ec-4f19-a5a0-37e6063ffe91] Running
	I1123 08:59:10.523751  285663 system_pods.go:89] "metrics-server-85b7d694d7-q7k2v" [10e46f11-8afd-4338-abf6-90235104b38c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:59:10.523759  285663 system_pods.go:89] "nvidia-device-plugin-daemonset-brqdp" [3eb0354a-72da-4330-8b65-cdb7395b7a35] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:59:10.523780  285663 system_pods.go:89] "registry-6b586f9694-r7jl6" [30719118-851e-4542-a4f8-c89f68f6bd04] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:59:10.523793  285663 system_pods.go:89] "registry-creds-764b6fb674-lxww8" [5a528301-690c-4034-989a-9dd8b4c6b876] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:59:10.523801  285663 system_pods.go:89] "registry-proxy-xt9vl" [0e2ac94e-9a8b-4407-8901-cdf4a4fdfc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 08:59:10.523824  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gbxvb" [b0c3071e-fd3e-4174-a01b-9138498a07c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:59:10.523832  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qrk99" [235dd6f3-7b70-4cd1-b640-d2d877b3a77c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:59:10.523858  285663 system_pods.go:89] "storage-provisioner" [fbafe21b-5d28-4c82-a702-3ac2a06c124d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:59:10.523881  285663 retry.go:31] will retry after 345.682297ms: missing components: kube-dns
	I1123 08:59:10.622541  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:10.721921  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:10.823148  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:10.823590  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:10.923728  285663 system_pods.go:86] 19 kube-system pods found
	I1123 08:59:10.923766  285663 system_pods.go:89] "coredns-66bc5c9577-d2nfj" [bfd6365a-09c7-4c05-879a-e2eb73527961] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:59:10.923798  285663 system_pods.go:89] "csi-hostpath-attacher-0" [0fc9010e-94b4-4c43-b918-6beb79362f03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:59:10.923813  285663 system_pods.go:89] "csi-hostpath-resizer-0" [b6ebc0dc-1eaf-429b-8a6a-f632e2fc6e17] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:59:10.923820  285663 system_pods.go:89] "csi-hostpathplugin-2kj78" [61ab5b5b-d56e-493a-99cb-522dc9af7cfe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 08:59:10.923825  285663 system_pods.go:89] "etcd-addons-984173" [fd3e24a8-6c25-4b06-91bb-546e6bbf3282] Running
	I1123 08:59:10.923838  285663 system_pods.go:89] "kindnet-694tf" [c26ca19a-a40f-44d6-b753-479597734109] Running
	I1123 08:59:10.923843  285663 system_pods.go:89] "kube-apiserver-addons-984173" [24699bd1-e12c-409d-a518-2986ee6304e4] Running
	I1123 08:59:10.923847  285663 system_pods.go:89] "kube-controller-manager-addons-984173" [db313608-3959-4244-9fdf-e032e049f063] Running
	I1123 08:59:10.923872  285663 system_pods.go:89] "kube-ingress-dns-minikube" [47acf22f-c161-46e7-8b97-692610b92f19] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:59:10.923882  285663 system_pods.go:89] "kube-proxy-wfr86" [161587c4-704b-4433-bd0d-df2bbce113bf] Running
	I1123 08:59:10.923887  285663 system_pods.go:89] "kube-scheduler-addons-984173" [b4634404-b4ec-4f19-a5a0-37e6063ffe91] Running
	I1123 08:59:10.923894  285663 system_pods.go:89] "metrics-server-85b7d694d7-q7k2v" [10e46f11-8afd-4338-abf6-90235104b38c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:59:10.923905  285663 system_pods.go:89] "nvidia-device-plugin-daemonset-brqdp" [3eb0354a-72da-4330-8b65-cdb7395b7a35] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:59:10.923912  285663 system_pods.go:89] "registry-6b586f9694-r7jl6" [30719118-851e-4542-a4f8-c89f68f6bd04] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:59:10.923923  285663 system_pods.go:89] "registry-creds-764b6fb674-lxww8" [5a528301-690c-4034-989a-9dd8b4c6b876] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:59:10.923946  285663 system_pods.go:89] "registry-proxy-xt9vl" [0e2ac94e-9a8b-4407-8901-cdf4a4fdfc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 08:59:10.923961  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gbxvb" [b0c3071e-fd3e-4174-a01b-9138498a07c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:59:10.923981  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qrk99" [235dd6f3-7b70-4cd1-b640-d2d877b3a77c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:59:10.923996  285663 system_pods.go:89] "storage-provisioner" [fbafe21b-5d28-4c82-a702-3ac2a06c124d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:59:10.924028  285663 retry.go:31] will retry after 601.407037ms: missing components: kube-dns
	I1123 08:59:11.058157  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:11.206268  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:11.286819  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:11.286868  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:11.530156  285663 system_pods.go:86] 19 kube-system pods found
	I1123 08:59:11.530199  285663 system_pods.go:89] "coredns-66bc5c9577-d2nfj" [bfd6365a-09c7-4c05-879a-e2eb73527961] Running
	I1123 08:59:11.530211  285663 system_pods.go:89] "csi-hostpath-attacher-0" [0fc9010e-94b4-4c43-b918-6beb79362f03] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:59:11.530220  285663 system_pods.go:89] "csi-hostpath-resizer-0" [b6ebc0dc-1eaf-429b-8a6a-f632e2fc6e17] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:59:11.530229  285663 system_pods.go:89] "csi-hostpathplugin-2kj78" [61ab5b5b-d56e-493a-99cb-522dc9af7cfe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 08:59:11.530234  285663 system_pods.go:89] "etcd-addons-984173" [fd3e24a8-6c25-4b06-91bb-546e6bbf3282] Running
	I1123 08:59:11.530239  285663 system_pods.go:89] "kindnet-694tf" [c26ca19a-a40f-44d6-b753-479597734109] Running
	I1123 08:59:11.530245  285663 system_pods.go:89] "kube-apiserver-addons-984173" [24699bd1-e12c-409d-a518-2986ee6304e4] Running
	I1123 08:59:11.530249  285663 system_pods.go:89] "kube-controller-manager-addons-984173" [db313608-3959-4244-9fdf-e032e049f063] Running
	I1123 08:59:11.530256  285663 system_pods.go:89] "kube-ingress-dns-minikube" [47acf22f-c161-46e7-8b97-692610b92f19] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:59:11.530272  285663 system_pods.go:89] "kube-proxy-wfr86" [161587c4-704b-4433-bd0d-df2bbce113bf] Running
	I1123 08:59:11.530278  285663 system_pods.go:89] "kube-scheduler-addons-984173" [b4634404-b4ec-4f19-a5a0-37e6063ffe91] Running
	I1123 08:59:11.530288  285663 system_pods.go:89] "metrics-server-85b7d694d7-q7k2v" [10e46f11-8afd-4338-abf6-90235104b38c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:59:11.530295  285663 system_pods.go:89] "nvidia-device-plugin-daemonset-brqdp" [3eb0354a-72da-4330-8b65-cdb7395b7a35] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:59:11.530304  285663 system_pods.go:89] "registry-6b586f9694-r7jl6" [30719118-851e-4542-a4f8-c89f68f6bd04] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:59:11.530311  285663 system_pods.go:89] "registry-creds-764b6fb674-lxww8" [5a528301-690c-4034-989a-9dd8b4c6b876] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:59:11.530317  285663 system_pods.go:89] "registry-proxy-xt9vl" [0e2ac94e-9a8b-4407-8901-cdf4a4fdfc8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 08:59:11.530326  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gbxvb" [b0c3071e-fd3e-4174-a01b-9138498a07c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:59:11.530332  285663 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qrk99" [235dd6f3-7b70-4cd1-b640-d2d877b3a77c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:59:11.530346  285663 system_pods.go:89] "storage-provisioner" [fbafe21b-5d28-4c82-a702-3ac2a06c124d] Running
	I1123 08:59:11.530364  285663 system_pods.go:126] duration metric: took 1.653636605s to wait for k8s-apps to be running ...
	I1123 08:59:11.530377  285663 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:59:11.530442  285663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:59:11.543333  285663 system_svc.go:56] duration metric: took 12.934721ms WaitForService to wait for kubelet
	I1123 08:59:11.543368  285663 kubeadm.go:587] duration metric: took 43.127152055s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:59:11.543384  285663 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:59:11.546413  285663 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:59:11.546440  285663 node_conditions.go:123] node cpu capacity is 2
	I1123 08:59:11.546457  285663 node_conditions.go:105] duration metric: took 3.066199ms to run NodePressure ...
	I1123 08:59:11.546469  285663 start.go:242] waiting for startup goroutines ...
	I1123 08:59:11.557653  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:11.707453  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:11.807997  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:11.809210  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:12.060168  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:12.206330  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:12.304322  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:12.304443  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:12.558757  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:12.707705  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:12.789325  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:12.789786  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:13.057571  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:13.207274  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:13.288273  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:13.288659  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:13.562251  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:13.706525  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:13.809190  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:13.809602  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:14.059012  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:14.207595  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:14.287572  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:14.287953  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:14.558273  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:14.707819  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:14.788358  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:14.788765  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:15.057836  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:15.207113  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:15.287754  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:15.288245  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:15.558445  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:15.706378  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:15.787266  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:15.787397  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:16.065978  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:16.207467  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:16.288059  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:16.288425  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:16.558389  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:16.707697  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:16.787842  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:16.788241  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:17.058145  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:17.206218  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:17.286992  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:17.287471  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:17.558135  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:17.706682  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:17.786500  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:17.787439  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:18.057986  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:18.206413  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:18.287156  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:18.287624  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:18.558956  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:18.706411  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:18.786295  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:18.786500  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:19.057752  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:19.206440  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:19.286117  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:19.286651  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:19.559235  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:19.706521  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:19.786155  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:19.786879  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:20.057703  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:20.206836  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:20.287331  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:20.287753  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:20.558658  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:20.707274  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:20.787873  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:20.788392  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:21.058148  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:21.206355  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:21.285918  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:21.287245  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:21.557214  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:21.706623  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:21.786967  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:21.787247  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:22.057686  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:22.206870  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:22.285356  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:22.287159  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:22.557882  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:22.707609  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:22.786697  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:22.787128  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:23.057931  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:23.206898  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:23.285323  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:23.287686  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:23.557965  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:23.707041  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:23.786572  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:23.786720  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:24.059242  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:24.206201  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:24.285226  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:24.287927  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:24.558512  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:24.707000  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:24.786575  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:24.786704  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:25.057842  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:25.206976  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:25.285652  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:25.286782  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:25.557271  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:25.706613  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:25.788376  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:25.788770  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:26.058970  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:26.205847  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:26.284800  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:26.285789  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:26.558597  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:26.707662  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:26.809182  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:26.809573  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:27.060123  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:27.206624  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:27.287521  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:27.287683  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:27.558615  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:27.707383  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:27.787736  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:27.787771  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:28.058865  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:28.207081  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:28.284838  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:28.286990  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:28.557871  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:28.706559  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:28.787216  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:28.787671  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:29.057735  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:29.207085  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:29.286868  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:29.287771  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:29.558198  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:29.706853  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:29.799745  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:29.799904  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:30.058837  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:30.207203  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:30.287860  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:30.288237  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:30.558256  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:30.706212  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:30.787179  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:30.787761  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:31.057577  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:31.206896  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:31.285342  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:31.286060  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:31.557550  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:31.706409  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:31.786770  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:31.787164  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:32.059631  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:32.207229  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:32.286559  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:32.287386  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:32.558475  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:32.705812  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:32.785184  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:32.786980  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:33.057242  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:33.206159  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:33.286587  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:33.286804  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:33.558068  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:33.706155  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:33.786219  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:33.786849  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:34.057688  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:34.206480  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:34.291641  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:34.291796  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:34.557516  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:34.706908  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:34.808472  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:34.808647  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:35.058618  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:35.206564  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:35.287051  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:35.287368  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:35.558838  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:35.707169  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:35.787100  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:35.787468  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:36.058854  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:36.206050  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:36.286173  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:36.286304  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:36.558451  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:36.706403  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:36.785840  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:36.786876  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:37.057652  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:37.206603  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:37.286884  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:37.287182  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:37.557530  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:37.707156  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:37.785390  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:37.786199  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:38.058548  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:38.207544  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:38.287370  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:38.287747  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:38.557881  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:38.706223  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:38.786066  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:38.788535  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:39.058211  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:39.206836  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:39.288297  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:39.289632  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:39.558356  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:39.708254  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:39.787941  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:39.788304  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:40.066679  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:40.210972  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:40.287289  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:40.288045  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:40.558362  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:40.710048  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:40.822616  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:40.822803  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:41.057044  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:41.206774  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:41.286308  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:41.286944  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:41.571896  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:41.707196  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:41.789390  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:41.789860  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:42.058206  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:42.206752  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:42.308165  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:42.308526  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:42.558950  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:42.706597  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:42.807428  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:42.807841  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:43.068378  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:43.217823  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:43.286452  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:43.287006  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:43.561525  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:43.706759  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:43.786075  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:43.786794  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:44.062381  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:44.206639  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:44.286647  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:44.287003  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:44.562719  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:44.706950  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:44.786627  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:44.787071  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:45.068006  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:45.210150  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:45.288734  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:45.289436  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:45.563118  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:45.706185  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:45.787333  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:45.787542  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:46.058572  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:46.206889  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:46.287207  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:46.287859  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:46.557281  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:46.706563  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:46.808174  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:46.808551  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:47.057882  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:47.206981  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:47.285085  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:47.286336  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:47.557474  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:47.706386  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:47.785563  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:47.786240  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:48.058127  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:48.207056  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:48.286875  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:48.287145  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:48.557337  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:48.707085  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:48.786535  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:48.786841  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:49.057747  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:49.206601  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:49.289319  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:49.289359  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:49.557828  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:49.706595  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:49.786750  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:49.786876  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:50.057933  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:50.205917  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:50.285804  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:59:50.285985  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:50.557553  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:50.708066  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:50.786983  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:50.787337  285663 kapi.go:107] duration metric: took 1m16.006090155s to wait for kubernetes.io/minikube-addons=registry ...
	I1123 08:59:51.058966  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:51.206151  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:51.286641  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:51.558469  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:51.706824  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:51.787245  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:52.057989  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:52.206216  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:52.286134  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:52.557065  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:52.706308  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:52.786351  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:53.057937  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:53.206223  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:53.288275  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:53.558202  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:53.709112  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:53.788369  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:54.064467  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:54.208290  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:54.289503  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:54.558431  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:54.706725  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:54.785598  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:55.060214  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:55.207018  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:55.287433  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:55.557666  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:55.707961  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:55.788247  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:56.057909  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:56.206368  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:56.286253  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:56.558191  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:56.707158  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:56.785971  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:57.059033  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:57.206127  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:57.286323  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:57.557542  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:57.706334  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:57.787247  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:58.059585  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:58.207340  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:59:58.289356  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:58.558343  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:58.709858  285663 kapi.go:107] duration metric: took 1m20.506831383s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1123 08:59:58.712704  285663 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-984173 cluster.
	I1123 08:59:58.715479  285663 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1123 08:59:58.719558  285663 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1123 08:59:58.786928  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:59.059119  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:59.286704  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:59:59.556837  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:59:59.785913  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:00.059350  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:00.346038  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:00.561370  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:00.794383  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:01.062573  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:01.317866  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:01.593710  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:01.789557  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:02.059389  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:02.287887  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:02.558527  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:02.787613  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:03.059061  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:03.287018  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:03.557399  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:03.786963  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:04.057812  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:04.285840  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:04.558134  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:04.787786  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:05.061177  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:05.287161  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:05.558534  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:05.786847  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:06.066618  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:06.288763  285663 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 09:00:06.560441  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:06.786963  285663 kapi.go:107] duration metric: took 1m32.004680286s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1123 09:00:07.057606  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:07.563413  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:08.088462  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:08.562658  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:09.058866  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:09.557767  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:10.058897  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:10.558438  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:11.060614  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:11.557623  285663 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 09:00:12.058231  285663 kapi.go:107] duration metric: took 1m37.004281849s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1123 09:00:12.061375  285663 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, registry-creds, storage-provisioner, inspektor-gadget, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1123 09:00:12.064266  285663 addons.go:530] duration metric: took 1m43.647604139s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin registry-creds storage-provisioner inspektor-gadget cloud-spanner ingress-dns metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1123 09:00:12.064336  285663 start.go:247] waiting for cluster config update ...
	I1123 09:00:12.064360  285663 start.go:256] writing updated cluster config ...
	I1123 09:00:12.064669  285663 ssh_runner.go:195] Run: rm -f paused
	I1123 09:00:12.069513  285663 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:00:12.158061  285663 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d2nfj" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:12.163423  285663 pod_ready.go:94] pod "coredns-66bc5c9577-d2nfj" is "Ready"
	I1123 09:00:12.163452  285663 pod_ready.go:86] duration metric: took 5.363546ms for pod "coredns-66bc5c9577-d2nfj" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:12.166190  285663 pod_ready.go:83] waiting for pod "etcd-addons-984173" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:12.171073  285663 pod_ready.go:94] pod "etcd-addons-984173" is "Ready"
	I1123 09:00:12.171101  285663 pod_ready.go:86] duration metric: took 4.881119ms for pod "etcd-addons-984173" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:12.173593  285663 pod_ready.go:83] waiting for pod "kube-apiserver-addons-984173" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:12.178228  285663 pod_ready.go:94] pod "kube-apiserver-addons-984173" is "Ready"
	I1123 09:00:12.178258  285663 pod_ready.go:86] duration metric: took 4.637703ms for pod "kube-apiserver-addons-984173" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:12.180670  285663 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-984173" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:12.473745  285663 pod_ready.go:94] pod "kube-controller-manager-addons-984173" is "Ready"
	I1123 09:00:12.473774  285663 pod_ready.go:86] duration metric: took 293.078777ms for pod "kube-controller-manager-addons-984173" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:12.674016  285663 pod_ready.go:83] waiting for pod "kube-proxy-wfr86" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:13.076556  285663 pod_ready.go:94] pod "kube-proxy-wfr86" is "Ready"
	I1123 09:00:13.076595  285663 pod_ready.go:86] duration metric: took 402.557863ms for pod "kube-proxy-wfr86" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:13.274256  285663 pod_ready.go:83] waiting for pod "kube-scheduler-addons-984173" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:13.673573  285663 pod_ready.go:94] pod "kube-scheduler-addons-984173" is "Ready"
	I1123 09:00:13.673606  285663 pod_ready.go:86] duration metric: took 399.318929ms for pod "kube-scheduler-addons-984173" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:13.673621  285663 pod_ready.go:40] duration metric: took 1.604071389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:00:13.747374  285663 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:00:13.753917  285663 out.go:179] * Done! kubectl is now configured to use "addons-984173" cluster and "default" namespace by default
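
The readiness wait logged above keys off the component and k8s-app labels listed in the message. To repeat the same check by hand against this cluster, a minimal sketch with plain kubectl (the context name comes from the "Done!" line; the label selectors are assumed to match the ones minikube waited on):

    kubectl --context addons-984173 -n kube-system get pods \
      -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)'
    kubectl --context addons-984173 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m

kubectl wait exits non-zero if the selected pods do not reach Ready within the timeout, mirroring the 4m0s budget in the log.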
	
	
	==> CRI-O <==
	Nov 23 09:00:41 addons-984173 crio[828]: time="2025-11-23T09:00:41.73332676Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:41 addons-984173 crio[828]: time="2025-11-23T09:00:41.734019413Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:41 addons-984173 crio[828]: time="2025-11-23T09:00:41.751191925Z" level=info msg="Created container 370d0875bbd29de0dce5615d3ad31bb41db05ca20319269a6379a5e9c5686d4e: default/test-local-path/busybox" id=a82b9360-2048-4664-b5b9-efb3cf227c4f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:00:41 addons-984173 crio[828]: time="2025-11-23T09:00:41.7524212Z" level=info msg="Starting container: 370d0875bbd29de0dce5615d3ad31bb41db05ca20319269a6379a5e9c5686d4e" id=2244b905-c764-4d50-8f85-aa6758e705e6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:00:41 addons-984173 crio[828]: time="2025-11-23T09:00:41.754636021Z" level=info msg="Started container" PID=5335 containerID=370d0875bbd29de0dce5615d3ad31bb41db05ca20319269a6379a5e9c5686d4e description=default/test-local-path/busybox id=2244b905-c764-4d50-8f85-aa6758e705e6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bf594c5d83fb3a8dab364c9f2beb6b14dd7e75b97d687f21fac1950285dd94d4
	Nov 23 09:00:42 addons-984173 crio[828]: time="2025-11-23T09:00:42.876909353Z" level=info msg="Stopping pod sandbox: bf594c5d83fb3a8dab364c9f2beb6b14dd7e75b97d687f21fac1950285dd94d4" id=2a7e6647-943b-4a2e-843d-3e81c9b24b6d name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 09:00:42 addons-984173 crio[828]: time="2025-11-23T09:00:42.877201476Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:bf594c5d83fb3a8dab364c9f2beb6b14dd7e75b97d687f21fac1950285dd94d4 UID:407e72e2-b184-4ee7-b8b9-89c11db88585 NetNS:/var/run/netns/90632bdf-3797-407f-a8fb-27a30db9f412 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000788e8}] Aliases:map[]}"
	Nov 23 09:00:42 addons-984173 crio[828]: time="2025-11-23T09:00:42.877371021Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Nov 23 09:00:42 addons-984173 crio[828]: time="2025-11-23T09:00:42.914478491Z" level=info msg="Stopped pod sandbox: bf594c5d83fb3a8dab364c9f2beb6b14dd7e75b97d687f21fac1950285dd94d4" id=2a7e6647-943b-4a2e-843d-3e81c9b24b6d name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 09:00:45 addons-984173 crio[828]: time="2025-11-23T09:00:45.001723702Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49/POD" id=3f34cd69-4048-4751-8c7c-857d2df28035 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:00:45 addons-984173 crio[828]: time="2025-11-23T09:00:45.001813262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:45 addons-984173 crio[828]: time="2025-11-23T09:00:45.040467672Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49 Namespace:local-path-storage ID:8556558ac34d3f29b5db01d04406eb23417c8b02bac9bc69686a183df7e8b180 UID:e1b3a0c4-51db-4184-b12c-737c29d351fc NetNS:/var/run/netns/527c9cf0-34e8-458b-89dc-559eff712e9b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079380}] Aliases:map[]}"
	Nov 23 09:00:45 addons-984173 crio[828]: time="2025-11-23T09:00:45.04051093Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49 to CNI network \"kindnet\" (type=ptp)"
	Nov 23 09:00:45 addons-984173 crio[828]: time="2025-11-23T09:00:45.087214425Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49 Namespace:local-path-storage ID:8556558ac34d3f29b5db01d04406eb23417c8b02bac9bc69686a183df7e8b180 UID:e1b3a0c4-51db-4184-b12c-737c29d351fc NetNS:/var/run/netns/527c9cf0-34e8-458b-89dc-559eff712e9b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079380}] Aliases:map[]}"
	Nov 23 09:00:45 addons-984173 crio[828]: time="2025-11-23T09:00:45.08741711Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49 for CNI network kindnet (type=ptp)"
	Nov 23 09:00:45 addons-984173 crio[828]: time="2025-11-23T09:00:45.092750288Z" level=info msg="Ran pod sandbox 8556558ac34d3f29b5db01d04406eb23417c8b02bac9bc69686a183df7e8b180 with infra container: local-path-storage/helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49/POD" id=3f34cd69-4048-4751-8c7c-857d2df28035 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:00:45 addons-984173 crio[828]: time="2025-11-23T09:00:45.095647407Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=80f1d74d-e52b-4d31-8cc3-272144329822 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:00:45 addons-984173 crio[828]: time="2025-11-23T09:00:45.098123893Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=08976ab8-215a-4d50-9bd4-b04880ed6113 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:00:45 addons-984173 crio[828]: time="2025-11-23T09:00:45.110321459Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49/helper-pod" id=57bef939-94c0-48b7-9de2-d032ddeb6aa4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:00:45 addons-984173 crio[828]: time="2025-11-23T09:00:45.110648249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:45 addons-984173 crio[828]: time="2025-11-23T09:00:45.138426816Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:45 addons-984173 crio[828]: time="2025-11-23T09:00:45.141721618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:00:45 addons-984173 crio[828]: time="2025-11-23T09:00:45.186303084Z" level=info msg="Created container 928c8264d4eb6cd637b3e544fff7d31893ea566f88664615bcbf79948c831f19: local-path-storage/helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49/helper-pod" id=57bef939-94c0-48b7-9de2-d032ddeb6aa4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:00:45 addons-984173 crio[828]: time="2025-11-23T09:00:45.187487977Z" level=info msg="Starting container: 928c8264d4eb6cd637b3e544fff7d31893ea566f88664615bcbf79948c831f19" id=8384b50e-552d-460d-81c6-43dfbc3751d9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:00:45 addons-984173 crio[828]: time="2025-11-23T09:00:45.199379578Z" level=info msg="Started container" PID=5505 containerID=928c8264d4eb6cd637b3e544fff7d31893ea566f88664615bcbf79948c831f19 description=local-path-storage/helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49/helper-pod id=8384b50e-552d-460d-81c6-43dfbc3751d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8556558ac34d3f29b5db01d04406eb23417c8b02bac9bc69686a183df7e8b180
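
The CRI-O entries above come from the node's crio service. Roughly the same view can be pulled straight from the minikube node; a sketch, assuming the profile name from this run and that crio runs as the crio systemd unit in the kicbase image:

    minikube -p addons-984173 ssh -- sudo journalctl -u crio --no-pager -n 200

Raising -n (or adding --since) widens the window if the interesting pod lifecycle events happened earlier than the lines captured here.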
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	928c8264d4eb6       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             Less than a second ago   Exited              helper-pod                               0                   8556558ac34d3       helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49   local-path-storage
	370d0875bbd29       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            4 seconds ago            Exited              busybox                                  0                   bf594c5d83fb3       test-local-path                                              default
	8639aa77e2837       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago            Exited              helper-pod                               0                   0c4a3ae5b2198       helper-pod-create-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49   local-path-storage
	09f6618d96f99       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          9 seconds ago            Exited              registry-test                            0                   ca2d86ff130a5       registry-test                                                default
	abf50150c6b0d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          28 seconds ago           Running             busybox                                  0                   795e8ad4bf31f       busybox                                                      default
	742ade421fb24       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          34 seconds ago           Running             csi-snapshotter                          0                   f616696f74e87       csi-hostpathplugin-2kj78                                     kube-system
	f6783f9da9552       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          35 seconds ago           Running             csi-provisioner                          0                   f616696f74e87       csi-hostpathplugin-2kj78                                     kube-system
	37d6af059fa8d       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            37 seconds ago           Running             liveness-probe                           0                   f616696f74e87       csi-hostpathplugin-2kj78                                     kube-system
	66657c8a6cec5       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           38 seconds ago           Running             hostpath                                 0                   f616696f74e87       csi-hostpathplugin-2kj78                                     kube-system
	497989a5477b2       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             39 seconds ago           Running             controller                               0                   b3ca61edb8e18       ingress-nginx-controller-6c8bf45fb-gr75s                     ingress-nginx
	4a940b19c91cb       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 47 seconds ago           Running             gcp-auth                                 0                   283e348dea31b       gcp-auth-78565c9fb4-ks57h                                    gcp-auth
	8586599f3919f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                50 seconds ago           Running             node-driver-registrar                    0                   f616696f74e87       csi-hostpathplugin-2kj78                                     kube-system
	96443e9c408d8       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            52 seconds ago           Running             gadget                                   0                   673e7e0ed84f5       gadget-7lvml                                                 gadget
	957a25f0a87eb       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             52 seconds ago           Exited              patch                                    2                   77ed41f1a8977       ingress-nginx-admission-patch-dhzqh                          ingress-nginx
	6f902ae88d97e       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              55 seconds ago           Running             registry-proxy                           0                   2aa485ec26bc9       registry-proxy-xt9vl                                         kube-system
	75511f019181b       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     59 seconds ago           Running             nvidia-device-plugin-ctr                 0                   33d5e9c0058db       nvidia-device-plugin-daemonset-brqdp                         kube-system
	de8e74b6f79cb       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago       Running             volume-snapshot-controller               0                   58a50852f3c21       snapshot-controller-7d9fbc56b8-qrk99                         kube-system
	f4b7a6278f7aa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago       Exited              create                                   0                   ea2db8b3a1fa7       ingress-nginx-admission-create-t4d4b                         ingress-nginx
	575e9ea051577       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago       Running             registry                                 0                   4fd4fc271ae14       registry-6b586f9694-r7jl6                                    kube-system
	2b31531176241       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago       Running             csi-external-health-monitor-controller   0                   f616696f74e87       csi-hostpathplugin-2kj78                                     kube-system
	bbd54f9144620       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago       Running             csi-resizer                              0                   637b5019f023b       csi-hostpath-resizer-0                                       kube-system
	3c3749cfa9b1e       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago       Running             csi-attacher                             0                   7f514ef75329f       csi-hostpath-attacher-0                                      kube-system
	f93636a2eb282       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago       Running             volume-snapshot-controller               0                   76497357f2e3d       snapshot-controller-7d9fbc56b8-gbxvb                         kube-system
	5f99b88dae427       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago       Running             yakd                                     0                   df984c1630bcd       yakd-dashboard-5ff678cb9-8c2d4                               yakd-dashboard
	8f1edccdddb80       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago       Running             minikube-ingress-dns                     0                   c3091ae4d59d7       kube-ingress-dns-minikube                                    kube-system
	d833df8b1059c       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               About a minute ago       Running             cloud-spanner-emulator                   0                   d48f1904d4f6d       cloud-spanner-emulator-5bdddb765-272hq                       default
	27c8f23d0b241       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago       Running             local-path-provisioner                   0                   61cee50775478       local-path-provisioner-648f6765c9-psfzp                      local-path-storage
	1559bd52645fb       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago       Running             metrics-server                           0                   d26d6967363f5       metrics-server-85b7d694d7-q7k2v                              kube-system
	6c78922b69b65       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago       Running             coredns                                  0                   e09e45ceae81a       coredns-66bc5c9577-d2nfj                                     kube-system
	de914953e20a9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago       Running             storage-provisioner                      0                   ecea510f4a783       storage-provisioner                                          kube-system
	87bae25a4298b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago            Running             kube-proxy                               0                   2123de48ed60f       kube-proxy-wfr86                                             kube-system
	529e3e6584de1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago            Running             kindnet-cni                              0                   624c4ba5d4732       kindnet-694tf                                                kube-system
	22aab316066d2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago            Running             kube-controller-manager                  0                   1b6718cc289ba       kube-controller-manager-addons-984173                        kube-system
	d9e34f2271d2d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago            Running             etcd                                     0                   515260730e8da       etcd-addons-984173                                           kube-system
	61a76b638e0c8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago            Running             kube-scheduler                           0                   93de3b7164280       kube-scheduler-addons-984173                                 kube-system
	126a521cf3c9c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago            Running             kube-apiserver                           0                   621e70c0d457e       kube-apiserver-addons-984173                                 kube-system
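
The table above is the node-level container listing from the CRI. It can be regenerated on the same node with crictl; a sketch, again assuming this run's profile name:

    minikube -p addons-984173 ssh -- sudo crictl ps -a
    minikube -p addons-984173 ssh -- sudo crictl pods

crictl ps -a includes exited containers (such as the helper-pod and registry-test rows above), while crictl pods lists the sandboxes they ran in.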
	
	
	==> coredns [6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da] <==
	[INFO] 10.244.0.18:45567 - 55019 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.031670232s
	[INFO] 10.244.0.18:45567 - 34008 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000125925s
	[INFO] 10.244.0.18:45567 - 19766 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00016385s
	[INFO] 10.244.0.18:44761 - 54278 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150566s
	[INFO] 10.244.0.18:44761 - 54073 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000216241s
	[INFO] 10.244.0.18:60954 - 33700 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000109335s
	[INFO] 10.244.0.18:60954 - 33511 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000181032s
	[INFO] 10.244.0.18:37864 - 1551 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107586s
	[INFO] 10.244.0.18:37864 - 1371 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000095196s
	[INFO] 10.244.0.18:46406 - 7599 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002389786s
	[INFO] 10.244.0.18:46406 - 7385 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00240705s
	[INFO] 10.244.0.18:45602 - 17654 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000181474s
	[INFO] 10.244.0.18:45602 - 17834 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000242711s
	[INFO] 10.244.0.20:58756 - 4153 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00021757s
	[INFO] 10.244.0.20:49572 - 9940 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000090742s
	[INFO] 10.244.0.20:41829 - 53885 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000168462s
	[INFO] 10.244.0.20:33570 - 46025 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000281792s
	[INFO] 10.244.0.20:40211 - 51999 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116243s
	[INFO] 10.244.0.20:60296 - 54940 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000302904s
	[INFO] 10.244.0.20:41203 - 42266 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003383725s
	[INFO] 10.244.0.20:42253 - 40498 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002493548s
	[INFO] 10.244.0.20:39723 - 9600 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001181698s
	[INFO] 10.244.0.20:45445 - 48542 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001976333s
	[INFO] 10.244.0.23:40262 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000176051s
	[INFO] 10.244.0.23:59836 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101852s
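
The NXDOMAIN lines above are ordinary search-path expansion: with the usual pod resolver settings (ndots:5), a name like registry.kube-system.svc.cluster.local is first tried with the namespace, service, cluster, and host search suffixes appended, and only the final bare lookup returns NOERROR. To repeat the lookup from inside the cluster, a sketch using a throwaway busybox pod (image tag assumed):

    kubectl --context addons-984173 run dns-probe --rm -it --restart=Never \
      --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local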
	
	
	==> describe nodes <==
	Name:               addons-984173
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-984173
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=addons-984173
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_58_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-984173
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-984173"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:58:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-984173
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:00:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:00:35 +0000   Sun, 23 Nov 2025 08:58:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:00:35 +0000   Sun, 23 Nov 2025 08:58:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:00:35 +0000   Sun, 23 Nov 2025 08:58:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:00:35 +0000   Sun, 23 Nov 2025 08:59:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-984173
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                6a936c40-0715-486d-ba6b-a609979f7ac2
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     cloud-spanner-emulator-5bdddb765-272hq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  gadget                      gadget-7lvml                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  gcp-auth                    gcp-auth-78565c9fb4-ks57h                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-gr75s                      100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m12s
	  kube-system                 coredns-66bc5c9577-d2nfj                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 csi-hostpathplugin-2kj78                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 etcd-addons-984173                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-694tf                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-addons-984173                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-addons-984173                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-proxy-wfr86                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-addons-984173                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 metrics-server-85b7d694d7-q7k2v                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m14s
	  kube-system                 nvidia-device-plugin-daemonset-brqdp                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 registry-6b586f9694-r7jl6                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 registry-creds-764b6fb674-lxww8                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 registry-proxy-xt9vl                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 snapshot-controller-7d9fbc56b8-gbxvb                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 snapshot-controller-7d9fbc56b8-qrk99                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  local-path-storage          helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-648f6765c9-psfzp                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-8c2d4                                0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m16s  kube-proxy       
	  Normal   Starting                 2m24s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m24s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m23s  kubelet          Node addons-984173 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m23s  kubelet          Node addons-984173 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m23s  kubelet          Node addons-984173 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m19s  node-controller  Node addons-984173 event: Registered Node addons-984173 in Controller
	  Normal   NodeReady                97s    kubelet          Node addons-984173 status is now: NodeReady
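
The node summary above matches kubectl describe node output for the single control-plane node; note that CPU requests already sit at 1050m of the 2 allocatable cores. A sketch of rechecking it directly:

    kubectl --context addons-984173 describe node addons-984173
    kubectl --context addons-984173 get pods -A \
      -o custom-columns='NS:.metadata.namespace,NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu'

The custom-columns form is only an approximation, since it prints per-container requests rather than a node-wide sum.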
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015154] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.511595] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034200] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753844] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.833249] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:37] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/22/fs': -2
	[Nov23 08:56] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 08:58] overlayfs: idmapped layers are currently not supported
	[  +0.083595] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26] <==
	{"level":"warn","ts":"2025-11-23T08:58:19.155842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.166418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.180317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.198133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.213826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.243884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.253850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.269727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.286040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.297897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.326953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.338338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.351345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.372997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.390196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.433637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.466281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.494123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:19.585924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:35.275219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:35.280014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:57.277002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:57.294553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:57.319200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:58:57.333017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45510","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [4a940b19c91cba536d1faf436f7e4cf75e22128cd7881abb3fc0d5bdca59149d] <==
	2025/11/23 08:59:58 GCP Auth Webhook started!
	2025/11/23 09:00:14 Ready to marshal response ...
	2025/11/23 09:00:14 Ready to write response ...
	2025/11/23 09:00:14 Ready to marshal response ...
	2025/11/23 09:00:14 Ready to write response ...
	2025/11/23 09:00:14 Ready to marshal response ...
	2025/11/23 09:00:14 Ready to write response ...
	2025/11/23 09:00:33 Ready to marshal response ...
	2025/11/23 09:00:33 Ready to write response ...
	2025/11/23 09:00:35 Ready to marshal response ...
	2025/11/23 09:00:35 Ready to write response ...
	2025/11/23 09:00:35 Ready to marshal response ...
	2025/11/23 09:00:35 Ready to write response ...
	2025/11/23 09:00:44 Ready to marshal response ...
	2025/11/23 09:00:44 Ready to write response ...
	
	
	==> kernel <==
	 09:00:46 up  1:43,  0 user,  load average: 2.12, 2.66, 3.23
	Linux addons-984173 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8] <==
	E1123 08:58:59.460125       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:58:59.460206       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1123 08:59:00.860588       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:59:00.860626       1 metrics.go:72] Registering metrics
	I1123 08:59:00.860677       1 controller.go:711] "Syncing nftables rules"
	I1123 08:59:09.465475       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:59:09.465531       1 main.go:301] handling current node
	I1123 08:59:19.463146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:59:19.463181       1 main.go:301] handling current node
	I1123 08:59:29.459875       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:59:29.459905       1 main.go:301] handling current node
	I1123 08:59:39.458725       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:59:39.458789       1 main.go:301] handling current node
	I1123 08:59:49.459662       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:59:49.459741       1 main.go:301] handling current node
	I1123 08:59:59.459483       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:59:59.459528       1 main.go:301] handling current node
	I1123 09:00:09.458937       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:00:09.458991       1 main.go:301] handling current node
	I1123 09:00:19.459358       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:00:19.459388       1 main.go:301] handling current node
	I1123 09:00:29.467392       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:00:29.467500       1 main.go:301] handling current node
	I1123 09:00:39.458877       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:00:39.458937       1 main.go:301] handling current node
	
	
	==> kube-apiserver [126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90] <==
	E1123 08:59:09.881284       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.186.148:443: connect: connection refused" logger="UnhandledError"
	E1123 08:59:24.428756       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.217.66:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.217.66:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.217.66:443: connect: connection refused" logger="UnhandledError"
	W1123 08:59:24.428925       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 08:59:24.428987       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1123 08:59:25.430483       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 08:59:25.430532       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1123 08:59:25.430545       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1123 08:59:25.430580       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 08:59:25.430635       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1123 08:59:25.431744       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1123 08:59:29.442023       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 08:59:29.442165       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1123 08:59:29.443943       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.217.66:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.217.66:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1123 08:59:29.530836       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1123 09:00:23.074640       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43458: use of closed network connection
	E1123 09:00:23.514952       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43504: use of closed network connection
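
The repeated 503 / "failed to download v1beta1.metrics.k8s.io" errors above typically mean the aggregated metrics API was not yet serving while metrics-server started up, which lines up with the TestAddons/parallel/MetricsServer failure in this run. A sketch of the usual follow-up checks, assuming the addon's standard k8s-app=metrics-server label:

    kubectl --context addons-984173 get apiservice v1beta1.metrics.k8s.io
    kubectl --context addons-984173 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context addons-984173 top node

An Available=False condition on the APIService, or an error from top node, points at metrics-server rather than the apiserver itself.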
	
	
	==> kube-controller-manager [22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca] <==
	I1123 08:58:27.280010       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:58:27.289532       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:58:27.291397       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:58:27.291423       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:58:27.291435       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:58:27.305524       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:58:27.306850       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-984173" podCIDRs=["10.244.0.0/24"]
	I1123 08:58:27.307846       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:58:27.307922       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:58:27.308431       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:58:27.310098       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:58:27.316282       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:58:27.318647       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:58:27.321372       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1123 08:58:32.788010       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1123 08:58:57.269888       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1123 08:58:57.270055       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1123 08:58:57.270106       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1123 08:58:57.307495       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1123 08:58:57.312040       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 08:58:57.371156       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:58:57.412595       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:59:12.265511       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1123 08:59:27.376824       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1123 08:59:27.425727       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c] <==
	I1123 08:58:29.300494       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:58:29.402130       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:58:29.503139       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:58:29.503177       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 08:58:29.503258       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:58:29.548128       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:58:29.548173       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:58:29.553177       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:58:29.553643       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:58:29.553657       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:58:29.558226       1 config.go:200] "Starting service config controller"
	I1123 08:58:29.558244       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:58:29.558279       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:58:29.558284       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:58:29.558296       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:58:29.558300       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:58:29.559074       1 config.go:309] "Starting node config controller"
	I1123 08:58:29.559081       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:58:29.559089       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:58:29.658771       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:58:29.658841       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:58:29.659126       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c] <==
	E1123 08:58:20.385246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:58:20.385372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:58:20.385496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:58:20.385599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:58:20.385699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:58:20.388597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 08:58:20.389029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:58:20.389154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:58:20.389236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:58:20.389304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:58:20.389335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:58:20.389430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:58:20.389552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:58:20.389634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:58:21.206573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:58:21.240078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:58:21.240078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:58:21.454852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:58:21.462287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:58:21.493397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:58:21.534902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:58:21.556565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:58:21.605746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:58:21.623027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1123 08:58:21.942121       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
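
The "Failed to watch ... is forbidden" entries above are all logged within the first two seconds after this scheduler instance starts and stop once the final "Caches are synced" line appears, which points at RBAC objects not yet being reconciled at startup rather than a lasting permission problem. If similar denials kept recurring, an impersonation check along these lines (a sketch; it assumes the addons-984173 context from this run is still reachable) would confirm whether system:kube-scheduler holds the expected list/watch permissions:

    kubectl --context addons-984173 auth can-i list persistentvolumes --as=system:kube-scheduler
    kubectl --context addons-984173 auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler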
	
	
	==> kubelet <==
	Nov 23 09:00:39 addons-984173 kubelet[1283]: I1123 09:00:39.084755    1283 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5a644596-477b-4ad5-9482-33b1a1453f8b-gcp-creds\") on node \"addons-984173\" DevicePath \"\""
	Nov 23 09:00:39 addons-984173 kubelet[1283]: I1123 09:00:39.084843    1283 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/5a644596-477b-4ad5-9482-33b1a1453f8b-script\") on node \"addons-984173\" DevicePath \"\""
	Nov 23 09:00:39 addons-984173 kubelet[1283]: I1123 09:00:39.865266    1283 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c4a3ae5b219840abec82ed3287e3cb4a0bf0d083ba05d0391b2784853b3bd9c"
	Nov 23 09:00:39 addons-984173 kubelet[1283]: I1123 09:00:39.892861    1283 status_manager.go:1073] "Failed to delete status for pod" pod="local-path-storage/helper-pod-create-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49" err="pods \"helper-pod-create-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49\" not found"
	Nov 23 09:00:40 addons-984173 kubelet[1283]: I1123 09:00:40.798851    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/407e72e2-b184-4ee7-b8b9-89c11db88585-gcp-creds\") pod \"test-local-path\" (UID: \"407e72e2-b184-4ee7-b8b9-89c11db88585\") " pod="default/test-local-path"
	Nov 23 09:00:40 addons-984173 kubelet[1283]: I1123 09:00:40.798939    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhsr6\" (UniqueName: \"kubernetes.io/projected/407e72e2-b184-4ee7-b8b9-89c11db88585-kube-api-access-jhsr6\") pod \"test-local-path\" (UID: \"407e72e2-b184-4ee7-b8b9-89c11db88585\") " pod="default/test-local-path"
	Nov 23 09:00:40 addons-984173 kubelet[1283]: I1123 09:00:40.799045    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49\" (UniqueName: \"kubernetes.io/host-path/407e72e2-b184-4ee7-b8b9-89c11db88585-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49\") pod \"test-local-path\" (UID: \"407e72e2-b184-4ee7-b8b9-89c11db88585\") " pod="default/test-local-path"
	Nov 23 09:00:41 addons-984173 kubelet[1283]: I1123 09:00:41.022543    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a644596-477b-4ad5-9482-33b1a1453f8b" path="/var/lib/kubelet/pods/5a644596-477b-4ad5-9482-33b1a1453f8b/volumes"
	Nov 23 09:00:41 addons-984173 kubelet[1283]: W1123 09:00:41.070483    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c/crio-bf594c5d83fb3a8dab364c9f2beb6b14dd7e75b97d687f21fac1950285dd94d4 WatchSource:0}: Error finding container bf594c5d83fb3a8dab364c9f2beb6b14dd7e75b97d687f21fac1950285dd94d4: Status 404 returned error can't find the container with id bf594c5d83fb3a8dab364c9f2beb6b14dd7e75b97d687f21fac1950285dd94d4
	Nov 23 09:00:43 addons-984173 kubelet[1283]: I1123 09:00:43.021619    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/407e72e2-b184-4ee7-b8b9-89c11db88585-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49\") pod \"407e72e2-b184-4ee7-b8b9-89c11db88585\" (UID: \"407e72e2-b184-4ee7-b8b9-89c11db88585\") "
	Nov 23 09:00:43 addons-984173 kubelet[1283]: I1123 09:00:43.022258    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhsr6\" (UniqueName: \"kubernetes.io/projected/407e72e2-b184-4ee7-b8b9-89c11db88585-kube-api-access-jhsr6\") pod \"407e72e2-b184-4ee7-b8b9-89c11db88585\" (UID: \"407e72e2-b184-4ee7-b8b9-89c11db88585\") "
	Nov 23 09:00:43 addons-984173 kubelet[1283]: I1123 09:00:43.022930    1283 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/407e72e2-b184-4ee7-b8b9-89c11db88585-gcp-creds\") pod \"407e72e2-b184-4ee7-b8b9-89c11db88585\" (UID: \"407e72e2-b184-4ee7-b8b9-89c11db88585\") "
	Nov 23 09:00:43 addons-984173 kubelet[1283]: I1123 09:00:43.022171    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/407e72e2-b184-4ee7-b8b9-89c11db88585-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49" (OuterVolumeSpecName: "data") pod "407e72e2-b184-4ee7-b8b9-89c11db88585" (UID: "407e72e2-b184-4ee7-b8b9-89c11db88585"). InnerVolumeSpecName "pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 23 09:00:43 addons-984173 kubelet[1283]: I1123 09:00:43.023212    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/407e72e2-b184-4ee7-b8b9-89c11db88585-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "407e72e2-b184-4ee7-b8b9-89c11db88585" (UID: "407e72e2-b184-4ee7-b8b9-89c11db88585"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 23 09:00:43 addons-984173 kubelet[1283]: I1123 09:00:43.030957    1283 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/407e72e2-b184-4ee7-b8b9-89c11db88585-kube-api-access-jhsr6" (OuterVolumeSpecName: "kube-api-access-jhsr6") pod "407e72e2-b184-4ee7-b8b9-89c11db88585" (UID: "407e72e2-b184-4ee7-b8b9-89c11db88585"). InnerVolumeSpecName "kube-api-access-jhsr6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 23 09:00:43 addons-984173 kubelet[1283]: I1123 09:00:43.124114    1283 reconciler_common.go:299] "Volume detached for volume \"pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49\" (UniqueName: \"kubernetes.io/host-path/407e72e2-b184-4ee7-b8b9-89c11db88585-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49\") on node \"addons-984173\" DevicePath \"\""
	Nov 23 09:00:43 addons-984173 kubelet[1283]: I1123 09:00:43.124303    1283 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jhsr6\" (UniqueName: \"kubernetes.io/projected/407e72e2-b184-4ee7-b8b9-89c11db88585-kube-api-access-jhsr6\") on node \"addons-984173\" DevicePath \"\""
	Nov 23 09:00:43 addons-984173 kubelet[1283]: I1123 09:00:43.124385    1283 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/407e72e2-b184-4ee7-b8b9-89c11db88585-gcp-creds\") on node \"addons-984173\" DevicePath \"\""
	Nov 23 09:00:43 addons-984173 kubelet[1283]: I1123 09:00:43.883106    1283 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf594c5d83fb3a8dab364c9f2beb6b14dd7e75b97d687f21fac1950285dd94d4"
	Nov 23 09:00:44 addons-984173 kubelet[1283]: I1123 09:00:44.853573    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e1b3a0c4-51db-4184-b12c-737c29d351fc-gcp-creds\") pod \"helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49\" (UID: \"e1b3a0c4-51db-4184-b12c-737c29d351fc\") " pod="local-path-storage/helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49"
	Nov 23 09:00:44 addons-984173 kubelet[1283]: I1123 09:00:44.854159    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/e1b3a0c4-51db-4184-b12c-737c29d351fc-script\") pod \"helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49\" (UID: \"e1b3a0c4-51db-4184-b12c-737c29d351fc\") " pod="local-path-storage/helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49"
	Nov 23 09:00:44 addons-984173 kubelet[1283]: I1123 09:00:44.854276    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z8d4\" (UniqueName: \"kubernetes.io/projected/e1b3a0c4-51db-4184-b12c-737c29d351fc-kube-api-access-6z8d4\") pod \"helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49\" (UID: \"e1b3a0c4-51db-4184-b12c-737c29d351fc\") " pod="local-path-storage/helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49"
	Nov 23 09:00:44 addons-984173 kubelet[1283]: I1123 09:00:44.854383    1283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/e1b3a0c4-51db-4184-b12c-737c29d351fc-data\") pod \"helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49\" (UID: \"e1b3a0c4-51db-4184-b12c-737c29d351fc\") " pod="local-path-storage/helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49"
	Nov 23 09:00:45 addons-984173 kubelet[1283]: I1123 09:00:45.028283    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="407e72e2-b184-4ee7-b8b9-89c11db88585" path="/var/lib/kubelet/pods/407e72e2-b184-4ee7-b8b9-89c11db88585/volumes"
	Nov 23 09:00:45 addons-984173 kubelet[1283]: W1123 09:00:45.091304    1283 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/733ef088474c1ca5232d1d6b09cd8c9ee16bbc4b48105a9d06ca2a60a0c09e3c/crio-8556558ac34d3f29b5db01d04406eb23417c8b02bac9bc69686a183df7e8b180 WatchSource:0}: Error finding container 8556558ac34d3f29b5db01d04406eb23417c8b02bac9bc69686a183df7e8b180: Status 404 returned error can't find the container with id 8556558ac34d3f29b5db01d04406eb23417c8b02bac9bc69686a183df7e8b180
	
	
	==> storage-provisioner [de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74] <==
	W1123 09:00:21.093291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:23.097297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:23.111412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:25.115391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:25.120594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:27.124458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:27.129204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:29.132516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:29.137843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:31.141967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:31.149734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:33.152481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:33.157319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:35.160773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:35.166869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:37.171078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:37.177957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:39.181272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:39.186393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:41.190013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:41.197012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:43.200495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:43.204952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:45.233285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:45.291764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
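
The repeated client-go warnings above indicate that this storage-provisioner build still lists or watches core v1 Endpoints, which the warning itself says is deprecated from v1.33 in favour of discovery.k8s.io/v1 EndpointSlice; the exact call site is not visible in this log. The two commands below (assuming the addons-984173 context is still available) simply show the deprecated resource and its replacement side by side:

    kubectl --context addons-984173 get endpoints -n kube-system
    kubectl --context addons-984173 get endpointslices.discovery.k8s.io -n kube-system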
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-984173 -n addons-984173
helpers_test.go:269: (dbg) Run:  kubectl --context addons-984173 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-t4d4b ingress-nginx-admission-patch-dhzqh registry-creds-764b6fb674-lxww8 helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-984173 describe pod ingress-nginx-admission-create-t4d4b ingress-nginx-admission-patch-dhzqh registry-creds-764b6fb674-lxww8 helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-984173 describe pod ingress-nginx-admission-create-t4d4b ingress-nginx-admission-patch-dhzqh registry-creds-764b6fb674-lxww8 helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49: exit status 1 (102.072827ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-t4d4b" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dhzqh" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-lxww8" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-984173 describe pod ingress-nginx-admission-create-t4d4b ingress-nginx-admission-patch-dhzqh registry-creds-764b6fb674-lxww8 helper-pod-delete-pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-984173 addons disable headlamp --alsologtostderr -v=1: exit status 11 (261.46953ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:00:47.452694  292979 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:00:47.453568  292979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:47.453610  292979 out.go:374] Setting ErrFile to fd 2...
	I1123 09:00:47.453634  292979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:47.454026  292979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:00:47.455025  292979 mustload.go:66] Loading cluster: addons-984173
	I1123 09:00:47.455495  292979 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:47.455518  292979 addons.go:622] checking whether the cluster is paused
	I1123 09:00:47.455717  292979 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:47.455737  292979 host.go:66] Checking if "addons-984173" exists ...
	I1123 09:00:47.456308  292979 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 09:00:47.474286  292979 ssh_runner.go:195] Run: systemctl --version
	I1123 09:00:47.474354  292979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 09:00:47.495742  292979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 09:00:47.607661  292979 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:00:47.607749  292979 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:00:47.636268  292979 cri.go:89] found id: "742ade421fb244b66d8fcfec87fa144fdc7f8738e38cca57ac6ac0bb8fbceba5"
	I1123 09:00:47.636291  292979 cri.go:89] found id: "f6783f9da95524615f3aa651e3af1196eb24de610f8b5966c9f13c754788eeea"
	I1123 09:00:47.636296  292979 cri.go:89] found id: "37d6af059fa8d9a5c10fe2947c3c9208c14a28bda6e706d53ace9352a57d3538"
	I1123 09:00:47.636301  292979 cri.go:89] found id: "66657c8a6cec57d0f3f4516fbacce8c43b7cd7b560ee7e99d4320d4d8ecee0db"
	I1123 09:00:47.636304  292979 cri.go:89] found id: "8586599f3919f69a8d7f1a7d090d598631c698412878d914f2b728fa92c78020"
	I1123 09:00:47.636307  292979 cri.go:89] found id: "6f902ae88d97ebbadeb5af33479296f1cb746c0980deddddd1b09ef5f3bc8365"
	I1123 09:00:47.636310  292979 cri.go:89] found id: "75511f019181b3813cc7d57031fb5c7b720c0760d787d3dc4e3bb9eab9e447b7"
	I1123 09:00:47.636313  292979 cri.go:89] found id: "de8e74b6f79cb01986f0143aa790500273203248c49b24f1e7569ebf6d7eea3b"
	I1123 09:00:47.636316  292979 cri.go:89] found id: "575e9ea051577a331acd367172e11954e99ac78da0892f1ce1556f6e7afc8bd1"
	I1123 09:00:47.636323  292979 cri.go:89] found id: "2b31531176241977a037c34aeb21cc0ee805446cd4582dd8c05f0bba5e5ee203"
	I1123 09:00:47.636326  292979 cri.go:89] found id: "bbd54f91446202b5a64aa6ec4f3f89b8ecf6e43bdac535a131f6367c8cea942c"
	I1123 09:00:47.636329  292979 cri.go:89] found id: "3c3749cfa9b1ed9f5c7d758974e38093080a45ccbe67f9df133d2a234c4d7216"
	I1123 09:00:47.636332  292979 cri.go:89] found id: "f93636a2eb282d8c5338280be50dffa8bd5f5b5cfff2c23a4c28fe0c8c63af6d"
	I1123 09:00:47.636336  292979 cri.go:89] found id: "8f1edccdddb80a5ba7c8da2abcb736527f5b92c08683957cf3031ee2a7946816"
	I1123 09:00:47.636339  292979 cri.go:89] found id: "1559bd52645fb109e782448eda0f021d65b39a587d504ef500408e924dfe9107"
	I1123 09:00:47.636345  292979 cri.go:89] found id: "6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da"
	I1123 09:00:47.636348  292979 cri.go:89] found id: "de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74"
	I1123 09:00:47.636353  292979 cri.go:89] found id: "87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c"
	I1123 09:00:47.636356  292979 cri.go:89] found id: "529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8"
	I1123 09:00:47.636359  292979 cri.go:89] found id: "22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca"
	I1123 09:00:47.636373  292979 cri.go:89] found id: "d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26"
	I1123 09:00:47.636382  292979 cri.go:89] found id: "61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c"
	I1123 09:00:47.636387  292979 cri.go:89] found id: "126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90"
	I1123 09:00:47.636390  292979 cri.go:89] found id: ""
	I1123 09:00:47.636439  292979 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:00:47.651522  292979 out.go:203] 
	W1123 09:00:47.654641  292979 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:00:47.654670  292979 out.go:285] * 
	* 
	W1123 09:00:47.660941  292979 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:00:47.663881  292979 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-984173 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.93s)
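
Every addon-disable failure in this report exits with the same MK_ADDON_DISABLE_PAUSED error: minikube's paused-cluster probe first lists kube-system containers over SSH with crictl (which succeeds, as the container IDs above show) and then runs "sudo runc list -f json", which fails because /run/runc does not exist inside this crio node. A minimal reproduction from the test host, reusing the ssh invocation style already used elsewhere in this report (why the directory is missing is not established by these logs):

    out/minikube-linux-arm64 -p addons-984173 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    out/minikube-linux-arm64 -p addons-984173 ssh "sudo runc list -f json"    # expected to fail: open /run/runc: no such file or directory
    out/minikube-linux-arm64 -p addons-984173 ssh "ls -ld /run/runc"          # checks whether the runc state directory named in the error exists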

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.28s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-272hq" [f04c89e2-94f7-45b8-84df-00b5e74ff5cf] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003425357s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-984173 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (274.049537ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:00:43.510018  292279 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:00:43.510821  292279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:43.510874  292279 out.go:374] Setting ErrFile to fd 2...
	I1123 09:00:43.510888  292279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:43.511204  292279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:00:43.511607  292279 mustload.go:66] Loading cluster: addons-984173
	I1123 09:00:43.512112  292279 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:43.512133  292279 addons.go:622] checking whether the cluster is paused
	I1123 09:00:43.512313  292279 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:43.512331  292279 host.go:66] Checking if "addons-984173" exists ...
	I1123 09:00:43.513040  292279 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 09:00:43.531096  292279 ssh_runner.go:195] Run: systemctl --version
	I1123 09:00:43.531157  292279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 09:00:43.559213  292279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 09:00:43.664373  292279 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:00:43.664457  292279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:00:43.700438  292279 cri.go:89] found id: "742ade421fb244b66d8fcfec87fa144fdc7f8738e38cca57ac6ac0bb8fbceba5"
	I1123 09:00:43.700462  292279 cri.go:89] found id: "f6783f9da95524615f3aa651e3af1196eb24de610f8b5966c9f13c754788eeea"
	I1123 09:00:43.700467  292279 cri.go:89] found id: "37d6af059fa8d9a5c10fe2947c3c9208c14a28bda6e706d53ace9352a57d3538"
	I1123 09:00:43.700472  292279 cri.go:89] found id: "66657c8a6cec57d0f3f4516fbacce8c43b7cd7b560ee7e99d4320d4d8ecee0db"
	I1123 09:00:43.700475  292279 cri.go:89] found id: "8586599f3919f69a8d7f1a7d090d598631c698412878d914f2b728fa92c78020"
	I1123 09:00:43.700478  292279 cri.go:89] found id: "6f902ae88d97ebbadeb5af33479296f1cb746c0980deddddd1b09ef5f3bc8365"
	I1123 09:00:43.700482  292279 cri.go:89] found id: "75511f019181b3813cc7d57031fb5c7b720c0760d787d3dc4e3bb9eab9e447b7"
	I1123 09:00:43.700485  292279 cri.go:89] found id: "de8e74b6f79cb01986f0143aa790500273203248c49b24f1e7569ebf6d7eea3b"
	I1123 09:00:43.700488  292279 cri.go:89] found id: "575e9ea051577a331acd367172e11954e99ac78da0892f1ce1556f6e7afc8bd1"
	I1123 09:00:43.700494  292279 cri.go:89] found id: "2b31531176241977a037c34aeb21cc0ee805446cd4582dd8c05f0bba5e5ee203"
	I1123 09:00:43.700497  292279 cri.go:89] found id: "bbd54f91446202b5a64aa6ec4f3f89b8ecf6e43bdac535a131f6367c8cea942c"
	I1123 09:00:43.700505  292279 cri.go:89] found id: "3c3749cfa9b1ed9f5c7d758974e38093080a45ccbe67f9df133d2a234c4d7216"
	I1123 09:00:43.700513  292279 cri.go:89] found id: "f93636a2eb282d8c5338280be50dffa8bd5f5b5cfff2c23a4c28fe0c8c63af6d"
	I1123 09:00:43.700516  292279 cri.go:89] found id: "8f1edccdddb80a5ba7c8da2abcb736527f5b92c08683957cf3031ee2a7946816"
	I1123 09:00:43.700519  292279 cri.go:89] found id: "1559bd52645fb109e782448eda0f021d65b39a587d504ef500408e924dfe9107"
	I1123 09:00:43.700524  292279 cri.go:89] found id: "6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da"
	I1123 09:00:43.700532  292279 cri.go:89] found id: "de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74"
	I1123 09:00:43.700536  292279 cri.go:89] found id: "87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c"
	I1123 09:00:43.700539  292279 cri.go:89] found id: "529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8"
	I1123 09:00:43.700542  292279 cri.go:89] found id: "22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca"
	I1123 09:00:43.700547  292279 cri.go:89] found id: "d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26"
	I1123 09:00:43.700550  292279 cri.go:89] found id: "61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c"
	I1123 09:00:43.700553  292279 cri.go:89] found id: "126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90"
	I1123 09:00:43.700556  292279 cri.go:89] found id: ""
	I1123 09:00:43.700604  292279 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:00:43.718580  292279 out.go:203] 
	W1123 09:00:43.721495  292279 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:00:43.721527  292279 out.go:285] * 
	* 
	W1123 09:00:43.727822  292279 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:00:43.731423  292279 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-984173 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.28s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.81s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-984173 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-984173 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-984173 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-984173 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-984173 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-984173 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-984173 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-984173 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [407e72e2-b184-4ee7-b8b9-89c11db88585] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [407e72e2-b184-4ee7-b8b9-89c11db88585] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [407e72e2-b184-4ee7-b8b9-89c11db88585] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003446538s
addons_test.go:967: (dbg) Run:  kubectl --context addons-984173 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 ssh "cat /opt/local-path-provisioner/pvc-f8972cbb-14b0-4c4c-b4dd-700e676acc49_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-984173 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-984173 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-984173 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (466.891413ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:00:44.817117  292541 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:00:44.817907  292541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:44.817950  292541 out.go:374] Setting ErrFile to fd 2...
	I1123 09:00:44.817970  292541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:44.818258  292541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:00:44.818633  292541 mustload.go:66] Loading cluster: addons-984173
	I1123 09:00:44.819056  292541 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:44.819093  292541 addons.go:622] checking whether the cluster is paused
	I1123 09:00:44.819221  292541 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:44.819245  292541 host.go:66] Checking if "addons-984173" exists ...
	I1123 09:00:44.819865  292541 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 09:00:44.842156  292541 ssh_runner.go:195] Run: systemctl --version
	I1123 09:00:44.842222  292541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 09:00:44.865480  292541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 09:00:44.988803  292541 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:00:44.988889  292541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:00:45.107480  292541 cri.go:89] found id: "742ade421fb244b66d8fcfec87fa144fdc7f8738e38cca57ac6ac0bb8fbceba5"
	I1123 09:00:45.107503  292541 cri.go:89] found id: "f6783f9da95524615f3aa651e3af1196eb24de610f8b5966c9f13c754788eeea"
	I1123 09:00:45.107509  292541 cri.go:89] found id: "37d6af059fa8d9a5c10fe2947c3c9208c14a28bda6e706d53ace9352a57d3538"
	I1123 09:00:45.107513  292541 cri.go:89] found id: "66657c8a6cec57d0f3f4516fbacce8c43b7cd7b560ee7e99d4320d4d8ecee0db"
	I1123 09:00:45.107517  292541 cri.go:89] found id: "8586599f3919f69a8d7f1a7d090d598631c698412878d914f2b728fa92c78020"
	I1123 09:00:45.107521  292541 cri.go:89] found id: "6f902ae88d97ebbadeb5af33479296f1cb746c0980deddddd1b09ef5f3bc8365"
	I1123 09:00:45.107525  292541 cri.go:89] found id: "75511f019181b3813cc7d57031fb5c7b720c0760d787d3dc4e3bb9eab9e447b7"
	I1123 09:00:45.107529  292541 cri.go:89] found id: "de8e74b6f79cb01986f0143aa790500273203248c49b24f1e7569ebf6d7eea3b"
	I1123 09:00:45.107532  292541 cri.go:89] found id: "575e9ea051577a331acd367172e11954e99ac78da0892f1ce1556f6e7afc8bd1"
	I1123 09:00:45.107539  292541 cri.go:89] found id: "2b31531176241977a037c34aeb21cc0ee805446cd4582dd8c05f0bba5e5ee203"
	I1123 09:00:45.107543  292541 cri.go:89] found id: "bbd54f91446202b5a64aa6ec4f3f89b8ecf6e43bdac535a131f6367c8cea942c"
	I1123 09:00:45.107546  292541 cri.go:89] found id: "3c3749cfa9b1ed9f5c7d758974e38093080a45ccbe67f9df133d2a234c4d7216"
	I1123 09:00:45.107562  292541 cri.go:89] found id: "f93636a2eb282d8c5338280be50dffa8bd5f5b5cfff2c23a4c28fe0c8c63af6d"
	I1123 09:00:45.107566  292541 cri.go:89] found id: "8f1edccdddb80a5ba7c8da2abcb736527f5b92c08683957cf3031ee2a7946816"
	I1123 09:00:45.107569  292541 cri.go:89] found id: "1559bd52645fb109e782448eda0f021d65b39a587d504ef500408e924dfe9107"
	I1123 09:00:45.107574  292541 cri.go:89] found id: "6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da"
	I1123 09:00:45.107577  292541 cri.go:89] found id: "de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74"
	I1123 09:00:45.107582  292541 cri.go:89] found id: "87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c"
	I1123 09:00:45.107585  292541 cri.go:89] found id: "529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8"
	I1123 09:00:45.107588  292541 cri.go:89] found id: "22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca"
	I1123 09:00:45.107595  292541 cri.go:89] found id: "d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26"
	I1123 09:00:45.107599  292541 cri.go:89] found id: "61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c"
	I1123 09:00:45.107601  292541 cri.go:89] found id: "126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90"
	I1123 09:00:45.107605  292541 cri.go:89] found id: ""
	I1123 09:00:45.107669  292541 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:00:45.164721  292541 out.go:203] 
	W1123 09:00:45.175171  292541 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:00:45.175321  292541 out.go:285] * 
	* 
	W1123 09:00:45.184938  292541 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:00:45.188645  292541 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-984173 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.81s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.31s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-brqdp" [3eb0354a-72da-4330-8b65-cdb7395b7a35] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.009676364s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-984173 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (303.923318ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:00:35.144168  291940 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:00:35.144885  291940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:35.144904  291940 out.go:374] Setting ErrFile to fd 2...
	I1123 09:00:35.144912  291940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:35.145213  291940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:00:35.145592  291940 mustload.go:66] Loading cluster: addons-984173
	I1123 09:00:35.145996  291940 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:35.146015  291940 addons.go:622] checking whether the cluster is paused
	I1123 09:00:35.146125  291940 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:35.146140  291940 host.go:66] Checking if "addons-984173" exists ...
	I1123 09:00:35.146654  291940 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 09:00:35.174846  291940 ssh_runner.go:195] Run: systemctl --version
	I1123 09:00:35.174904  291940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 09:00:35.195729  291940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 09:00:35.316332  291940 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:00:35.316426  291940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:00:35.350774  291940 cri.go:89] found id: "742ade421fb244b66d8fcfec87fa144fdc7f8738e38cca57ac6ac0bb8fbceba5"
	I1123 09:00:35.350800  291940 cri.go:89] found id: "f6783f9da95524615f3aa651e3af1196eb24de610f8b5966c9f13c754788eeea"
	I1123 09:00:35.350806  291940 cri.go:89] found id: "37d6af059fa8d9a5c10fe2947c3c9208c14a28bda6e706d53ace9352a57d3538"
	I1123 09:00:35.350810  291940 cri.go:89] found id: "66657c8a6cec57d0f3f4516fbacce8c43b7cd7b560ee7e99d4320d4d8ecee0db"
	I1123 09:00:35.350813  291940 cri.go:89] found id: "8586599f3919f69a8d7f1a7d090d598631c698412878d914f2b728fa92c78020"
	I1123 09:00:35.350817  291940 cri.go:89] found id: "6f902ae88d97ebbadeb5af33479296f1cb746c0980deddddd1b09ef5f3bc8365"
	I1123 09:00:35.350820  291940 cri.go:89] found id: "75511f019181b3813cc7d57031fb5c7b720c0760d787d3dc4e3bb9eab9e447b7"
	I1123 09:00:35.350823  291940 cri.go:89] found id: "de8e74b6f79cb01986f0143aa790500273203248c49b24f1e7569ebf6d7eea3b"
	I1123 09:00:35.350825  291940 cri.go:89] found id: "575e9ea051577a331acd367172e11954e99ac78da0892f1ce1556f6e7afc8bd1"
	I1123 09:00:35.350832  291940 cri.go:89] found id: "2b31531176241977a037c34aeb21cc0ee805446cd4582dd8c05f0bba5e5ee203"
	I1123 09:00:35.350835  291940 cri.go:89] found id: "bbd54f91446202b5a64aa6ec4f3f89b8ecf6e43bdac535a131f6367c8cea942c"
	I1123 09:00:35.350838  291940 cri.go:89] found id: "3c3749cfa9b1ed9f5c7d758974e38093080a45ccbe67f9df133d2a234c4d7216"
	I1123 09:00:35.350841  291940 cri.go:89] found id: "f93636a2eb282d8c5338280be50dffa8bd5f5b5cfff2c23a4c28fe0c8c63af6d"
	I1123 09:00:35.350844  291940 cri.go:89] found id: "8f1edccdddb80a5ba7c8da2abcb736527f5b92c08683957cf3031ee2a7946816"
	I1123 09:00:35.350848  291940 cri.go:89] found id: "1559bd52645fb109e782448eda0f021d65b39a587d504ef500408e924dfe9107"
	I1123 09:00:35.350856  291940 cri.go:89] found id: "6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da"
	I1123 09:00:35.350864  291940 cri.go:89] found id: "de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74"
	I1123 09:00:35.350869  291940 cri.go:89] found id: "87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c"
	I1123 09:00:35.350872  291940 cri.go:89] found id: "529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8"
	I1123 09:00:35.350875  291940 cri.go:89] found id: "22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca"
	I1123 09:00:35.350880  291940 cri.go:89] found id: "d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26"
	I1123 09:00:35.350886  291940 cri.go:89] found id: "61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c"
	I1123 09:00:35.350890  291940 cri.go:89] found id: "126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90"
	I1123 09:00:35.350893  291940 cri.go:89] found id: ""
	I1123 09:00:35.350945  291940 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:00:35.366647  291940 out.go:203] 
	W1123 09:00:35.369580  291940 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:00:35.369605  291940 out.go:285] * 
	* 
	W1123 09:00:35.375977  291940 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:00:35.379008  291940 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-984173 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.31s)
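
Note: the MK_ADDON_DISABLE_PAUSED failures above (and the identical ones in the other addon tests in this report) share the root cause visible in the stderr block: before disabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` over SSH, and on this CRI-O node that command exits 1 with `open /run/runc: no such file or directory`. Below is a minimal sketch of that check with an assumed fallback that treats the missing runc state directory as "no paused containers"; it is illustrative only, not minikube's actual implementation.

```go
// Hypothetical sketch, not minikube's actual code: run the same paused-container
// check the log shows (`sudo runc list -f json`) and treat a missing /run/runc
// state directory as "nothing paused" instead of a fatal error.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer holds the two fields of `runc list -f json` output used here.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// Assumed fallback: if runc has no state directory on this node,
		// report no paused containers rather than failing the whole operation.
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, out)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("paused check failed:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}
```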

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-8c2d4" [bdc83664-7b80-4d18-b24c-73212080bd8b] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003864961s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-984173 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-984173 addons disable yakd --alsologtostderr -v=1: exit status 11 (259.133387ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:00:28.858469  291844 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:00:28.859185  291844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:28.859201  291844 out.go:374] Setting ErrFile to fd 2...
	I1123 09:00:28.859207  291844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:00:28.859483  291844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:00:28.859794  291844 mustload.go:66] Loading cluster: addons-984173
	I1123 09:00:28.860191  291844 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:28.860208  291844 addons.go:622] checking whether the cluster is paused
	I1123 09:00:28.860321  291844 config.go:182] Loaded profile config "addons-984173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:00:28.860337  291844 host.go:66] Checking if "addons-984173" exists ...
	I1123 09:00:28.860874  291844 cli_runner.go:164] Run: docker container inspect addons-984173 --format={{.State.Status}}
	I1123 09:00:28.878809  291844 ssh_runner.go:195] Run: systemctl --version
	I1123 09:00:28.878866  291844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-984173
	I1123 09:00:28.897335  291844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/addons-984173/id_rsa Username:docker}
	I1123 09:00:29.004992  291844 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:00:29.005119  291844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:00:29.037783  291844 cri.go:89] found id: "742ade421fb244b66d8fcfec87fa144fdc7f8738e38cca57ac6ac0bb8fbceba5"
	I1123 09:00:29.037805  291844 cri.go:89] found id: "f6783f9da95524615f3aa651e3af1196eb24de610f8b5966c9f13c754788eeea"
	I1123 09:00:29.037810  291844 cri.go:89] found id: "37d6af059fa8d9a5c10fe2947c3c9208c14a28bda6e706d53ace9352a57d3538"
	I1123 09:00:29.037814  291844 cri.go:89] found id: "66657c8a6cec57d0f3f4516fbacce8c43b7cd7b560ee7e99d4320d4d8ecee0db"
	I1123 09:00:29.037817  291844 cri.go:89] found id: "8586599f3919f69a8d7f1a7d090d598631c698412878d914f2b728fa92c78020"
	I1123 09:00:29.037821  291844 cri.go:89] found id: "6f902ae88d97ebbadeb5af33479296f1cb746c0980deddddd1b09ef5f3bc8365"
	I1123 09:00:29.037824  291844 cri.go:89] found id: "75511f019181b3813cc7d57031fb5c7b720c0760d787d3dc4e3bb9eab9e447b7"
	I1123 09:00:29.037827  291844 cri.go:89] found id: "de8e74b6f79cb01986f0143aa790500273203248c49b24f1e7569ebf6d7eea3b"
	I1123 09:00:29.037830  291844 cri.go:89] found id: "575e9ea051577a331acd367172e11954e99ac78da0892f1ce1556f6e7afc8bd1"
	I1123 09:00:29.037837  291844 cri.go:89] found id: "2b31531176241977a037c34aeb21cc0ee805446cd4582dd8c05f0bba5e5ee203"
	I1123 09:00:29.037840  291844 cri.go:89] found id: "bbd54f91446202b5a64aa6ec4f3f89b8ecf6e43bdac535a131f6367c8cea942c"
	I1123 09:00:29.037843  291844 cri.go:89] found id: "3c3749cfa9b1ed9f5c7d758974e38093080a45ccbe67f9df133d2a234c4d7216"
	I1123 09:00:29.037846  291844 cri.go:89] found id: "f93636a2eb282d8c5338280be50dffa8bd5f5b5cfff2c23a4c28fe0c8c63af6d"
	I1123 09:00:29.037872  291844 cri.go:89] found id: "8f1edccdddb80a5ba7c8da2abcb736527f5b92c08683957cf3031ee2a7946816"
	I1123 09:00:29.037875  291844 cri.go:89] found id: "1559bd52645fb109e782448eda0f021d65b39a587d504ef500408e924dfe9107"
	I1123 09:00:29.037881  291844 cri.go:89] found id: "6c78922b69b65f34bdf813ac38c1b94560127b5c1a5fdc7c0d7b04d6b2bd93da"
	I1123 09:00:29.037886  291844 cri.go:89] found id: "de914953e20a9572875421fe281289c5a617caa68d12164ae74efc0d0f0d5c74"
	I1123 09:00:29.037891  291844 cri.go:89] found id: "87bae25a4298b621346870156435b497671db59c65b473f8aa7fbd44a84b519c"
	I1123 09:00:29.037895  291844 cri.go:89] found id: "529e3e6584de16cd6b6c4611907ac21f74cc0375667cd0d6ff7fd0ec0fe705b8"
	I1123 09:00:29.037898  291844 cri.go:89] found id: "22aab316066d2271588abbdfbf6c5cc1f5d0d9d0c172df0af63395d48da537ca"
	I1123 09:00:29.037905  291844 cri.go:89] found id: "d9e34f2271d2dfc6fd608a7de28303595293cc5d59c0065b12af26164d3a5d26"
	I1123 09:00:29.037918  291844 cri.go:89] found id: "61a76b638e0c8bddc4efefd70150493465f262e04f41e4652540707a8d5d166c"
	I1123 09:00:29.037922  291844 cri.go:89] found id: "126a521cf3c9c0b172dcc407ecbfa8fb34ee99d6ae94a557aa3deaaf1b125a90"
	I1123 09:00:29.037925  291844 cri.go:89] found id: ""
	I1123 09:00:29.037975  291844 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:00:29.052898  291844 out.go:203] 
	W1123 09:00:29.055681  291844 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:00:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:00:29.055714  291844 out.go:285] * 
	* 
	W1123 09:00:29.062070  291844 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:00:29.064854  291844 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-984173 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)
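
Note: the waiting step at the top of this test (pods matching "app.kubernetes.io/name=yakd-dashboard" within 2m0s) succeeded; only the subsequent disable failed, for the same runc reason as above. For reference, a rough client-go equivalent of that label-selector poll is sketched below; the kubeconfig path, poll interval, and structure are illustrative assumptions, not the test framework's actual helpers.

```go
// Illustrative sketch of a label-selector wait with client-go; paths and
// intervals are assumptions, not the minikube test helpers themselves.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for this CI job.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21969-282998/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("yakd-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=yakd-dashboard"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("running:", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}
```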

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-605613 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-605613 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-w25h4" [523799cc-4688-4483-80cb-a19fffb1c015] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-605613 -n functional-605613
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-23 09:17:40.302339182 +0000 UTC m=+1221.158321077
functional_test.go:1645: (dbg) Run:  kubectl --context functional-605613 describe po hello-node-connect-7d85dfc575-w25h4 -n default
functional_test.go:1645: (dbg) kubectl --context functional-605613 describe po hello-node-connect-7d85dfc575-w25h4 -n default:
Name:             hello-node-connect-7d85dfc575-w25h4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-605613/192.168.49.2
Start Time:       Sun, 23 Nov 2025 09:07:39 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6d7pg (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6d7pg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w25h4 to functional-605613
Normal   Pulling    7m7s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m7s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m7s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m58s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m58s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-605613 logs hello-node-connect-7d85dfc575-w25h4 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-605613 logs hello-node-connect-7d85dfc575-w25h4 -n default: exit status 1 (108.329884ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-w25h4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-605613 logs hello-node-connect-7d85dfc575-w25h4 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-605613 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-w25h4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-605613/192.168.49.2
Start Time:       Sun, 23 Nov 2025 09:07:39 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6d7pg (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6d7pg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w25h4 to functional-605613
Normal   Pulling    7m7s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m7s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m7s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m58s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m58s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-605613 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-605613 logs -l app=hello-node-connect: exit status 1 (92.154553ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-w25h4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-605613 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-605613 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.72.240
IPs:                      10.96.72.240
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30357/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
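
Note: the empty Endpoints above follow directly from the ImagePullBackOff in the pod events: the pod never becomes Ready, so the NodePort service has nothing to route to. The pull fails because `kicbase/echo-server` is an unqualified short name and CRI-O's short-name mode is enforcing, so the ambiguous alias is rejected rather than resolved. A minimal sketch of how the deployment step could sidestep that is shown below, assuming the image is published at docker.io/kicbase/echo-server:latest (an assumption about the registry, not something this report verifies).

```go
// Hypothetical sketch: create the deployment with a fully-qualified image
// reference so CRI-O's enforcing short-name mode has nothing to resolve.
// The docker.io/kicbase/echo-server:latest path is an assumption.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "functional-605613",
		"create", "deployment", "hello-node-connect",
		"--image", "docker.io/kicbase/echo-server:latest")
	// For the already-created deployment, the equivalent fix would be:
	// kubectl set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("create deployment failed:", err)
	}
}
```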
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-605613
helpers_test.go:243: (dbg) docker inspect functional-605613:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "573dde2119f9f0d847928256c9e5f8fe6fcdf58d28d1a69767fb4b798d4070f9",
	        "Created": "2025-11-23T09:04:40.858770976Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300555,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:04:40.921458226Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/573dde2119f9f0d847928256c9e5f8fe6fcdf58d28d1a69767fb4b798d4070f9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/573dde2119f9f0d847928256c9e5f8fe6fcdf58d28d1a69767fb4b798d4070f9/hostname",
	        "HostsPath": "/var/lib/docker/containers/573dde2119f9f0d847928256c9e5f8fe6fcdf58d28d1a69767fb4b798d4070f9/hosts",
	        "LogPath": "/var/lib/docker/containers/573dde2119f9f0d847928256c9e5f8fe6fcdf58d28d1a69767fb4b798d4070f9/573dde2119f9f0d847928256c9e5f8fe6fcdf58d28d1a69767fb4b798d4070f9-json.log",
	        "Name": "/functional-605613",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-605613:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-605613",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "573dde2119f9f0d847928256c9e5f8fe6fcdf58d28d1a69767fb4b798d4070f9",
	                "LowerDir": "/var/lib/docker/overlay2/fd8de0d85f2a859767f7109f18230eb2babe1b20db371dfb878e6651a1358f39-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fd8de0d85f2a859767f7109f18230eb2babe1b20db371dfb878e6651a1358f39/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fd8de0d85f2a859767f7109f18230eb2babe1b20db371dfb878e6651a1358f39/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fd8de0d85f2a859767f7109f18230eb2babe1b20db371dfb878e6651a1358f39/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-605613",
	                "Source": "/var/lib/docker/volumes/functional-605613/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-605613",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-605613",
	                "name.minikube.sigs.k8s.io": "functional-605613",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "264431a839886e7b98a1b3b8367b1b8a6972e1399dc6500bcc3f12591e4d2a34",
	            "SandboxKey": "/var/run/docker/netns/264431a83988",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33156"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33154"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33155"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-605613": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:ab:1b:7b:ff:da",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e5df44ac82bbf0ffcb8ead13d1d9d0ecad31ca290022ec62f1c153f398c0b965",
	                    "EndpointID": "dee0bdeb852a3cbc72ffe2aa8b551a1eb3e2e6ef861c25e96484961e0cfbb5d5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-605613",
	                        "573dde2119f9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
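
Note: the inspect output above shows how the node container publishes its services on loopback ports (22/tcp on 127.0.0.1:33152, 8441/tcp on 33155, and so on); the earlier `sshutil ... new ssh client` log lines connect through exactly such a mapping. A minimal sketch of that connection with golang.org/x/crypto/ssh follows; the private-key path is inferred from the pattern in the addons profile's log line and is an assumption here.

```go
// Minimal sketch (assumes golang.org/x/crypto/ssh): dial the loopback-published
// SSH port of the minikube node container, as the sshutil log lines do.
// Key path and port are taken from the logs above; the path is an inference.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21969-282998/.minikube/machines/functional-605613/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Host-key checking is skipped only because the target is a throwaway
		// local test container; do not do this against real hosts.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33152", cfg) // 22/tcp host mapping from docker inspect
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("%s err=%v\n", out, err)
}
```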
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-605613 -n functional-605613
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-605613 logs -n 25: (1.48918936s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-605613 ssh echo hello                                                                                                                          │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ cp      │ functional-605613 cp functional-605613:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1673992194/001/cp-test.txt                                │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ functional-605613 ssh cat /etc/hostname                                                                                                                   │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ functional-605613 ssh -n functional-605613 sudo cat /home/docker/cp-test.txt                                                                              │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ license │                                                                                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ cp      │ functional-605613 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                 │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ functional-605613 ssh sudo systemctl is-active docker                                                                                                     │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ ssh     │ functional-605613 ssh -n functional-605613 sudo cat /tmp/does/not/exist/cp-test.txt                                                                       │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ functional-605613 ssh sudo systemctl is-active containerd                                                                                                 │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ tunnel  │ functional-605613 tunnel --alsologtostderr                                                                                                                │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ tunnel  │ functional-605613 tunnel --alsologtostderr                                                                                                                │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ tunnel  │ functional-605613 tunnel --alsologtostderr                                                                                                                │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ image   │ functional-605613 image load --daemon kicbase/echo-server:functional-605613 --alsologtostderr                                                             │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ image   │ functional-605613 image ls                                                                                                                                │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ image   │ functional-605613 image load --daemon kicbase/echo-server:functional-605613 --alsologtostderr                                                             │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ image   │ functional-605613 image ls                                                                                                                                │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ image   │ functional-605613 image load --daemon kicbase/echo-server:functional-605613 --alsologtostderr                                                             │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ image   │ functional-605613 image ls                                                                                                                                │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ image   │ functional-605613 image save kicbase/echo-server:functional-605613 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ image   │ functional-605613 image rm kicbase/echo-server:functional-605613 --alsologtostderr                                                                        │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ image   │ functional-605613 image ls                                                                                                                                │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ image   │ functional-605613 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ image   │ functional-605613 image save --daemon kicbase/echo-server:functional-605613 --alsologtostderr                                                             │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ functional-605613 addons list                                                                                                                             │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ functional-605613 addons list -o json                                                                                                                     │ functional-605613 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:06:42
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:06:42.151722  304877 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:06:42.152006  304877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:06:42.152018  304877 out.go:374] Setting ErrFile to fd 2...
	I1123 09:06:42.152022  304877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:06:42.152437  304877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:06:42.153058  304877 out.go:368] Setting JSON to false
	I1123 09:06:42.154908  304877 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6551,"bootTime":1763882251,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 09:06:42.155007  304877 start.go:143] virtualization:  
	I1123 09:06:42.158747  304877 out.go:179] * [functional-605613] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 09:06:42.162752  304877 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:06:42.162880  304877 notify.go:221] Checking for updates...
	I1123 09:06:42.168800  304877 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:06:42.171897  304877 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:06:42.174921  304877 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 09:06:42.177999  304877 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 09:06:42.180884  304877 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:06:42.184500  304877 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:06:42.184678  304877 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:06:42.225045  304877 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 09:06:42.225160  304877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:06:42.288068  304877 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-23 09:06:42.277644856 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:06:42.288199  304877 docker.go:319] overlay module found
	I1123 09:06:42.293814  304877 out.go:179] * Using the docker driver based on existing profile
	I1123 09:06:42.296678  304877 start.go:309] selected driver: docker
	I1123 09:06:42.296703  304877 start.go:927] validating driver "docker" against &{Name:functional-605613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-605613 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:06:42.296792  304877 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:06:42.296892  304877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:06:42.353383  304877 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-23 09:06:42.343479549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:06:42.353876  304877 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:06:42.353901  304877 cni.go:84] Creating CNI manager for ""
	I1123 09:06:42.353967  304877 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:06:42.354017  304877 start.go:353] cluster config:
	{Name:functional-605613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-605613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:06:42.357253  304877 out.go:179] * Starting "functional-605613" primary control-plane node in "functional-605613" cluster
	I1123 09:06:42.360080  304877 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:06:42.363127  304877 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:06:42.366075  304877 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:06:42.366126  304877 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 09:06:42.366123  304877 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:06:42.366135  304877 cache.go:65] Caching tarball of preloaded images
	I1123 09:06:42.366220  304877 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 09:06:42.366229  304877 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:06:42.366345  304877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/config.json ...
	I1123 09:06:42.386583  304877 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:06:42.386594  304877 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:06:42.386614  304877 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:06:42.386644  304877 start.go:360] acquireMachinesLock for functional-605613: {Name:mka4ba8d8dc61863087fa1f5a4128d39de846908 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:06:42.386709  304877 start.go:364] duration metric: took 48.674µs to acquireMachinesLock for "functional-605613"
	I1123 09:06:42.386726  304877 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:06:42.386731  304877 fix.go:54] fixHost starting: 
	I1123 09:06:42.386987  304877 cli_runner.go:164] Run: docker container inspect functional-605613 --format={{.State.Status}}
	I1123 09:06:42.422841  304877 fix.go:112] recreateIfNeeded on functional-605613: state=Running err=<nil>
	W1123 09:06:42.422861  304877 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:06:42.426087  304877 out.go:252] * Updating the running docker "functional-605613" container ...
	I1123 09:06:42.426114  304877 machine.go:94] provisionDockerMachine start ...
	I1123 09:06:42.426206  304877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
	I1123 09:06:42.444138  304877 main.go:143] libmachine: Using SSH client type: native
	I1123 09:06:42.444554  304877 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33152 <nil> <nil>}
	I1123 09:06:42.444562  304877 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:06:42.596975  304877 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-605613
	
	I1123 09:06:42.596989  304877 ubuntu.go:182] provisioning hostname "functional-605613"
	I1123 09:06:42.597059  304877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
	I1123 09:06:42.614813  304877 main.go:143] libmachine: Using SSH client type: native
	I1123 09:06:42.615115  304877 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33152 <nil> <nil>}
	I1123 09:06:42.615124  304877 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-605613 && echo "functional-605613" | sudo tee /etc/hostname
	I1123 09:06:42.779532  304877 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-605613
	
	I1123 09:06:42.779616  304877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
	I1123 09:06:42.800293  304877 main.go:143] libmachine: Using SSH client type: native
	I1123 09:06:42.800591  304877 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33152 <nil> <nil>}
	I1123 09:06:42.800604  304877 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-605613' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-605613/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-605613' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:06:42.953651  304877 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:06:42.953666  304877 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 09:06:42.953686  304877 ubuntu.go:190] setting up certificates
	I1123 09:06:42.953702  304877 provision.go:84] configureAuth start
	I1123 09:06:42.953758  304877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-605613
	I1123 09:06:42.971673  304877 provision.go:143] copyHostCerts
	I1123 09:06:42.971730  304877 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 09:06:42.971746  304877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:06:42.971834  304877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 09:06:42.971928  304877 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 09:06:42.971932  304877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:06:42.971956  304877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 09:06:42.972003  304877 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 09:06:42.972006  304877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:06:42.972029  304877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 09:06:42.972072  304877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.functional-605613 san=[127.0.0.1 192.168.49.2 functional-605613 localhost minikube]
	I1123 09:06:43.045214  304877 provision.go:177] copyRemoteCerts
	I1123 09:06:43.045269  304877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:06:43.045311  304877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
	I1123 09:06:43.062599  304877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/functional-605613/id_rsa Username:docker}
	I1123 09:06:43.169793  304877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:06:43.187732  304877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:06:43.205362  304877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:06:43.223379  304877 provision.go:87] duration metric: took 269.653914ms to configureAuth
	I1123 09:06:43.223397  304877 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:06:43.223607  304877 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:06:43.223702  304877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
	I1123 09:06:43.241163  304877 main.go:143] libmachine: Using SSH client type: native
	I1123 09:06:43.241649  304877 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33152 <nil> <nil>}
	I1123 09:06:43.241663  304877 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:06:48.648244  304877 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:06:48.648258  304877 machine.go:97] duration metric: took 6.222137625s to provisionDockerMachine
	I1123 09:06:48.648268  304877 start.go:293] postStartSetup for "functional-605613" (driver="docker")
	I1123 09:06:48.648278  304877 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:06:48.648334  304877 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:06:48.648388  304877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
	I1123 09:06:48.665827  304877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/functional-605613/id_rsa Username:docker}
	I1123 09:06:48.769376  304877 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:06:48.772759  304877 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:06:48.772778  304877 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:06:48.772792  304877 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 09:06:48.772847  304877 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 09:06:48.772930  304877 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 09:06:48.773005  304877 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/test/nested/copy/284904/hosts -> hosts in /etc/test/nested/copy/284904
	I1123 09:06:48.773049  304877 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/284904
	I1123 09:06:48.780562  304877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:06:48.798040  304877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/test/nested/copy/284904/hosts --> /etc/test/nested/copy/284904/hosts (40 bytes)
	I1123 09:06:48.815846  304877 start.go:296] duration metric: took 167.562869ms for postStartSetup
	I1123 09:06:48.815919  304877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:06:48.815958  304877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
	I1123 09:06:48.832566  304877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/functional-605613/id_rsa Username:docker}
	I1123 09:06:48.934697  304877 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:06:48.939518  304877 fix.go:56] duration metric: took 6.552778864s for fixHost
	I1123 09:06:48.939534  304877 start.go:83] releasing machines lock for "functional-605613", held for 6.552817658s
	I1123 09:06:48.939613  304877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-605613
	I1123 09:06:48.956188  304877 ssh_runner.go:195] Run: cat /version.json
	I1123 09:06:48.956234  304877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
	I1123 09:06:48.956484  304877 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:06:48.956558  304877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
	I1123 09:06:48.973552  304877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/functional-605613/id_rsa Username:docker}
	I1123 09:06:48.976220  304877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/functional-605613/id_rsa Username:docker}
	I1123 09:06:49.167400  304877 ssh_runner.go:195] Run: systemctl --version
	I1123 09:06:49.173798  304877 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:06:49.210600  304877 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:06:49.214943  304877 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:06:49.215023  304877 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:06:49.222777  304877 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:06:49.222792  304877 start.go:496] detecting cgroup driver to use...
	I1123 09:06:49.222823  304877 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:06:49.222869  304877 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:06:49.238652  304877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:06:49.251877  304877 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:06:49.251928  304877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:06:49.267520  304877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:06:49.280647  304877 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:06:49.419300  304877 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:06:49.550850  304877 docker.go:234] disabling docker service ...
	I1123 09:06:49.550927  304877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:06:49.566687  304877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:06:49.579972  304877 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:06:49.715535  304877 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:06:49.852703  304877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:06:49.866415  304877 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:06:49.880496  304877 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:06:49.880561  304877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:06:49.889901  304877 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:06:49.889972  304877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:06:49.899114  304877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:06:49.907944  304877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:06:49.916848  304877 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:06:49.926058  304877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:06:49.935272  304877 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:06:49.944039  304877 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:06:49.953141  304877 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:06:49.960886  304877 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:06:49.968630  304877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:06:50.114035  304877 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:06:55.537996  304877 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.423937846s)
	I1123 09:06:55.538019  304877 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:06:55.538069  304877 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:06:55.541936  304877 start.go:564] Will wait 60s for crictl version
	I1123 09:06:55.541987  304877 ssh_runner.go:195] Run: which crictl
	I1123 09:06:55.545659  304877 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:06:55.576729  304877 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:06:55.576815  304877 ssh_runner.go:195] Run: crio --version
	I1123 09:06:55.609398  304877 ssh_runner.go:195] Run: crio --version
	I1123 09:06:55.643033  304877 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:06:55.645987  304877 cli_runner.go:164] Run: docker network inspect functional-605613 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:06:55.662498  304877 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 09:06:55.669841  304877 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1123 09:06:55.672788  304877 kubeadm.go:884] updating cluster {Name:functional-605613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-605613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:06:55.672921  304877 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:06:55.672991  304877 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:06:55.705313  304877 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:06:55.705325  304877 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:06:55.705382  304877 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:06:55.731458  304877 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:06:55.731469  304877 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:06:55.731475  304877 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1123 09:06:55.731581  304877 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-605613 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-605613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:06:55.731659  304877 ssh_runner.go:195] Run: crio config
	I1123 09:06:55.801652  304877 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1123 09:06:55.801681  304877 cni.go:84] Creating CNI manager for ""
	I1123 09:06:55.801690  304877 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:06:55.801705  304877 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:06:55.801727  304877 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-605613 NodeName:functional-605613 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:06:55.801851  304877 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-605613"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:06:55.801917  304877 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:06:55.809697  304877 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:06:55.809760  304877 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:06:55.817381  304877 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 09:06:55.829842  304877 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:06:55.843087  304877 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1123 09:06:55.856088  304877 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:06:55.859834  304877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:06:55.994338  304877 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:06:56.010369  304877 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613 for IP: 192.168.49.2
	I1123 09:06:56.010381  304877 certs.go:195] generating shared ca certs ...
	I1123 09:06:56.010397  304877 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:06:56.010552  304877 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 09:06:56.010595  304877 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 09:06:56.010601  304877 certs.go:257] generating profile certs ...
	I1123 09:06:56.010686  304877 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.key
	I1123 09:06:56.010735  304877 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/apiserver.key.a1ad393a
	I1123 09:06:56.010772  304877 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/proxy-client.key
	I1123 09:06:56.010885  304877 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 09:06:56.010915  304877 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 09:06:56.010922  304877 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:06:56.010949  304877 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:06:56.010970  304877 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:06:56.010991  304877 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 09:06:56.011041  304877 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:06:56.011666  304877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:06:56.032003  304877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:06:56.052371  304877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:06:56.071453  304877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:06:56.089322  304877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 09:06:56.107166  304877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:06:56.124373  304877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:06:56.142664  304877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:06:56.160417  304877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:06:56.178819  304877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 09:06:56.196183  304877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 09:06:56.212876  304877 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:06:56.225496  304877 ssh_runner.go:195] Run: openssl version
	I1123 09:06:56.231865  304877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 09:06:56.240179  304877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 09:06:56.244136  304877 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 09:06:56.244193  304877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 09:06:56.285078  304877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:06:56.292865  304877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:06:56.300975  304877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:06:56.304613  304877 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:06:56.304673  304877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:06:56.345591  304877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:06:56.353511  304877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 09:06:56.362071  304877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 09:06:56.365997  304877 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 09:06:56.366050  304877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 09:06:56.406813  304877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 09:06:56.414745  304877 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:06:56.418493  304877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:06:56.459541  304877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:06:56.500561  304877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:06:56.541885  304877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:06:56.582633  304877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:06:56.630056  304877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 09:06:56.690759  304877 kubeadm.go:401] StartCluster: {Name:functional-605613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-605613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:06:56.690855  304877 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:06:56.690917  304877 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:06:56.722327  304877 cri.go:89] found id: "f6dd7a6da2ab2fe2b86670495ab54128cb9c1833f7adba040252cb2bd7f41d92"
	I1123 09:06:56.722339  304877 cri.go:89] found id: "50fab0b70d3fd58d6dd41b0f8f89fef22e46d3f1a454dacb862916b1a3a01dd8"
	I1123 09:06:56.722343  304877 cri.go:89] found id: "e9abbe2f8f9e456eecb12b5b245336d535d9c39c542e2520f6b3a886b7859ddf"
	I1123 09:06:56.722346  304877 cri.go:89] found id: "27fdf0922c7b84fa5200180b391ccf11f58de52f71292a4c51318e1ba8d0b407"
	I1123 09:06:56.722349  304877 cri.go:89] found id: "befdc9676c305a763eb48fde7c5fbf9836fa59498658923536cfeec681d9b2ef"
	I1123 09:06:56.722352  304877 cri.go:89] found id: "e5acddeec0ecbdfae429f463134a50169cc719304fa4da536c75c8c0757aec66"
	I1123 09:06:56.722354  304877 cri.go:89] found id: "236246867f43481ea2007e9b5d11fda36e46915de7b753584c931d07b225edca"
	I1123 09:06:56.722357  304877 cri.go:89] found id: "1731721c79a8d80660bdc38f510aaf9242c382f500f90f5e270d6b65b006b0d8"
	I1123 09:06:56.722359  304877 cri.go:89] found id: "6cd86c70c3b026a486117d1f3a8bb9ad4aee3e5a18abc980885b28cabe4b7645"
	I1123 09:06:56.722364  304877 cri.go:89] found id: "66dd0c67914b7ca1fa7ef05e57ae56997a761977b5532573a820379cb530d0a4"
	I1123 09:06:56.722366  304877 cri.go:89] found id: ""
	I1123 09:06:56.722425  304877 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:06:56.734879  304877 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:06:56Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:06:56.734934  304877 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:06:56.743543  304877 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:06:56.743553  304877 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:06:56.743614  304877 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:06:56.751294  304877 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:06:56.751802  304877 kubeconfig.go:125] found "functional-605613" server: "https://192.168.49.2:8441"
	I1123 09:06:56.753150  304877 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:06:56.762969  304877 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-23 09:04:49.281816382 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-23 09:06:55.851283151 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1123 09:06:56.762979  304877 kubeadm.go:1161] stopping kube-system containers ...
	I1123 09:06:56.762990  304877 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1123 09:06:56.763046  304877 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:06:56.791680  304877 cri.go:89] found id: "f6dd7a6da2ab2fe2b86670495ab54128cb9c1833f7adba040252cb2bd7f41d92"
	I1123 09:06:56.791692  304877 cri.go:89] found id: "50fab0b70d3fd58d6dd41b0f8f89fef22e46d3f1a454dacb862916b1a3a01dd8"
	I1123 09:06:56.791695  304877 cri.go:89] found id: "e9abbe2f8f9e456eecb12b5b245336d535d9c39c542e2520f6b3a886b7859ddf"
	I1123 09:06:56.791697  304877 cri.go:89] found id: "27fdf0922c7b84fa5200180b391ccf11f58de52f71292a4c51318e1ba8d0b407"
	I1123 09:06:56.791699  304877 cri.go:89] found id: "befdc9676c305a763eb48fde7c5fbf9836fa59498658923536cfeec681d9b2ef"
	I1123 09:06:56.791702  304877 cri.go:89] found id: "e5acddeec0ecbdfae429f463134a50169cc719304fa4da536c75c8c0757aec66"
	I1123 09:06:56.791704  304877 cri.go:89] found id: "236246867f43481ea2007e9b5d11fda36e46915de7b753584c931d07b225edca"
	I1123 09:06:56.791706  304877 cri.go:89] found id: "1731721c79a8d80660bdc38f510aaf9242c382f500f90f5e270d6b65b006b0d8"
	I1123 09:06:56.791708  304877 cri.go:89] found id: ""
	I1123 09:06:56.791712  304877 cri.go:252] Stopping containers: [f6dd7a6da2ab2fe2b86670495ab54128cb9c1833f7adba040252cb2bd7f41d92 50fab0b70d3fd58d6dd41b0f8f89fef22e46d3f1a454dacb862916b1a3a01dd8 e9abbe2f8f9e456eecb12b5b245336d535d9c39c542e2520f6b3a886b7859ddf 27fdf0922c7b84fa5200180b391ccf11f58de52f71292a4c51318e1ba8d0b407 befdc9676c305a763eb48fde7c5fbf9836fa59498658923536cfeec681d9b2ef e5acddeec0ecbdfae429f463134a50169cc719304fa4da536c75c8c0757aec66 236246867f43481ea2007e9b5d11fda36e46915de7b753584c931d07b225edca 1731721c79a8d80660bdc38f510aaf9242c382f500f90f5e270d6b65b006b0d8]
	I1123 09:06:56.791779  304877 ssh_runner.go:195] Run: which crictl
	I1123 09:06:56.795471  304877 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 f6dd7a6da2ab2fe2b86670495ab54128cb9c1833f7adba040252cb2bd7f41d92 50fab0b70d3fd58d6dd41b0f8f89fef22e46d3f1a454dacb862916b1a3a01dd8 e9abbe2f8f9e456eecb12b5b245336d535d9c39c542e2520f6b3a886b7859ddf 27fdf0922c7b84fa5200180b391ccf11f58de52f71292a4c51318e1ba8d0b407 befdc9676c305a763eb48fde7c5fbf9836fa59498658923536cfeec681d9b2ef e5acddeec0ecbdfae429f463134a50169cc719304fa4da536c75c8c0757aec66 236246867f43481ea2007e9b5d11fda36e46915de7b753584c931d07b225edca 1731721c79a8d80660bdc38f510aaf9242c382f500f90f5e270d6b65b006b0d8
	I1123 09:06:56.858132  304877 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1123 09:06:56.976287  304877 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:06:56.984127  304877 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 23 09:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 23 09:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov 23 09:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Nov 23 09:04 /etc/kubernetes/scheduler.conf
	
	I1123 09:06:56.984184  304877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1123 09:06:56.992205  304877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1123 09:06:57.001033  304877 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:06:57.001119  304877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:06:57.009570  304877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1123 09:06:57.017379  304877 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:06:57.017482  304877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:06:57.025265  304877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1123 09:06:57.033505  304877 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:06:57.033566  304877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 09:06:57.041158  304877 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:06:57.049202  304877 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 09:06:57.096327  304877 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 09:06:59.565697  304877 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.469344618s)
	I1123 09:06:59.565767  304877 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1123 09:06:59.780980  304877 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 09:06:59.847052  304877 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1123 09:06:59.917564  304877 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:06:59.917635  304877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:07:00.418771  304877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:07:00.918712  304877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:07:00.935312  304877 api_server.go:72] duration metric: took 1.017747106s to wait for apiserver process to appear ...
	I1123 09:07:00.935328  304877 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:07:00.935346  304877 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 09:07:04.058003  304877 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 09:07:04.058023  304877 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 09:07:04.058038  304877 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 09:07:04.171727  304877 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 09:07:04.171743  304877 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 09:07:04.436052  304877 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 09:07:04.444535  304877 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:07:04.444551  304877 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:07:04.936147  304877 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 09:07:05.036028  304877 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:07:05.036049  304877 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:07:05.435464  304877 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 09:07:05.450404  304877 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:07:05.450423  304877 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:07:05.935715  304877 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 09:07:05.946310  304877 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1123 09:07:05.960042  304877 api_server.go:141] control plane version: v1.34.1
	I1123 09:07:05.960061  304877 api_server.go:131] duration metric: took 5.024727096s to wait for apiserver health ...
	I1123 09:07:05.960069  304877 cni.go:84] Creating CNI manager for ""
	I1123 09:07:05.960074  304877 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:07:05.963992  304877 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 09:07:05.966893  304877 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 09:07:05.971179  304877 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 09:07:05.971191  304877 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 09:07:05.985177  304877 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 09:07:06.527206  304877 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:07:06.530584  304877 system_pods.go:59] 8 kube-system pods found
	I1123 09:07:06.530615  304877 system_pods.go:61] "coredns-66bc5c9577-6fdcr" [1ec570f3-fcac-4284-90cc-7ed33de0eb96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:06.530622  304877 system_pods.go:61] "etcd-functional-605613" [0cd6920b-7c7e-4bec-abfb-5041decac7f0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:07:06.530627  304877 system_pods.go:61] "kindnet-6q654" [2e7c0b44-3086-408c-a011-c0f13da7b79f] Running
	I1123 09:07:06.530632  304877 system_pods.go:61] "kube-apiserver-functional-605613" [ea24ff68-59e0-4675-b7ac-d6e3908023f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:07:06.530638  304877 system_pods.go:61] "kube-controller-manager-functional-605613" [1d8157c0-4df4-41c7-b9ba-ad76bc74063a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:07:06.530642  304877 system_pods.go:61] "kube-proxy-4krg7" [44d3b3a5-1037-4b1d-ba22-784ea9f75de0] Running
	I1123 09:07:06.530648  304877 system_pods.go:61] "kube-scheduler-functional-605613" [3e293320-fb37-4cc6-8b32-6aa9053ca9e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:07:06.530651  304877 system_pods.go:61] "storage-provisioner" [46d9d073-fcf0-4a27-bc1e-bad777af7399] Running
	I1123 09:07:06.530656  304877 system_pods.go:74] duration metric: took 3.440687ms to wait for pod list to return data ...
	I1123 09:07:06.530663  304877 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:07:06.533828  304877 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:07:06.533850  304877 node_conditions.go:123] node cpu capacity is 2
	I1123 09:07:06.533861  304877 node_conditions.go:105] duration metric: took 3.194243ms to run NodePressure ...
	I1123 09:07:06.533922  304877 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1123 09:07:06.790544  304877 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1123 09:07:06.798987  304877 kubeadm.go:744] kubelet initialised
	I1123 09:07:06.798998  304877 kubeadm.go:745] duration metric: took 8.441336ms waiting for restarted kubelet to initialise ...
	I1123 09:07:06.799013  304877 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:07:06.810105  304877 ops.go:34] apiserver oom_adj: -16
	I1123 09:07:06.810116  304877 kubeadm.go:602] duration metric: took 10.066557841s to restartPrimaryControlPlane
	I1123 09:07:06.810124  304877 kubeadm.go:403] duration metric: took 10.119375136s to StartCluster
	I1123 09:07:06.810153  304877 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:06.810213  304877 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:07:06.810953  304877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:06.811180  304877 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:07:06.811553  304877 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:07:06.811617  304877 addons.go:70] Setting storage-provisioner=true in profile "functional-605613"
	I1123 09:07:06.811634  304877 addons.go:239] Setting addon storage-provisioner=true in "functional-605613"
	W1123 09:07:06.811639  304877 addons.go:248] addon storage-provisioner should already be in state true
	I1123 09:07:06.811660  304877 host.go:66] Checking if "functional-605613" exists ...
	I1123 09:07:06.812302  304877 cli_runner.go:164] Run: docker container inspect functional-605613 --format={{.State.Status}}
	I1123 09:07:06.812523  304877 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:07:06.812599  304877 addons.go:70] Setting default-storageclass=true in profile "functional-605613"
	I1123 09:07:06.812610  304877 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-605613"
	I1123 09:07:06.812960  304877 cli_runner.go:164] Run: docker container inspect functional-605613 --format={{.State.Status}}
	I1123 09:07:06.815703  304877 out.go:179] * Verifying Kubernetes components...
	I1123 09:07:06.819391  304877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:07:06.847697  304877 addons.go:239] Setting addon default-storageclass=true in "functional-605613"
	W1123 09:07:06.847708  304877 addons.go:248] addon default-storageclass should already be in state true
	I1123 09:07:06.847731  304877 host.go:66] Checking if "functional-605613" exists ...
	I1123 09:07:06.848159  304877 cli_runner.go:164] Run: docker container inspect functional-605613 --format={{.State.Status}}
	I1123 09:07:06.854717  304877 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:07:06.857818  304877 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:07:06.857832  304877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:07:06.857897  304877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
	I1123 09:07:06.884318  304877 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:07:06.884330  304877 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:07:06.884389  304877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
	I1123 09:07:06.912465  304877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/functional-605613/id_rsa Username:docker}
	I1123 09:07:06.922638  304877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/functional-605613/id_rsa Username:docker}
	I1123 09:07:07.096580  304877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:07:07.097623  304877 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:07:07.122378  304877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:07:07.986946  304877 node_ready.go:35] waiting up to 6m0s for node "functional-605613" to be "Ready" ...
	I1123 09:07:07.989911  304877 node_ready.go:49] node "functional-605613" is "Ready"
	I1123 09:07:07.989926  304877 node_ready.go:38] duration metric: took 2.962832ms for node "functional-605613" to be "Ready" ...
	I1123 09:07:07.989937  304877 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:07:07.989992  304877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:07:08.007953  304877 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 09:07:08.008562  304877 api_server.go:72] duration metric: took 1.197357857s to wait for apiserver process to appear ...
	I1123 09:07:08.008591  304877 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:07:08.008610  304877 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1123 09:07:08.011034  304877 addons.go:530] duration metric: took 1.199475855s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 09:07:08.018528  304877 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1123 09:07:08.019678  304877 api_server.go:141] control plane version: v1.34.1
	I1123 09:07:08.019695  304877 api_server.go:131] duration metric: took 11.098244ms to wait for apiserver health ...
	I1123 09:07:08.019703  304877 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:07:08.023637  304877 system_pods.go:59] 8 kube-system pods found
	I1123 09:07:08.023657  304877 system_pods.go:61] "coredns-66bc5c9577-6fdcr" [1ec570f3-fcac-4284-90cc-7ed33de0eb96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:08.023665  304877 system_pods.go:61] "etcd-functional-605613" [0cd6920b-7c7e-4bec-abfb-5041decac7f0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:07:08.023669  304877 system_pods.go:61] "kindnet-6q654" [2e7c0b44-3086-408c-a011-c0f13da7b79f] Running
	I1123 09:07:08.023675  304877 system_pods.go:61] "kube-apiserver-functional-605613" [ea24ff68-59e0-4675-b7ac-d6e3908023f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:07:08.023680  304877 system_pods.go:61] "kube-controller-manager-functional-605613" [1d8157c0-4df4-41c7-b9ba-ad76bc74063a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:07:08.023683  304877 system_pods.go:61] "kube-proxy-4krg7" [44d3b3a5-1037-4b1d-ba22-784ea9f75de0] Running
	I1123 09:07:08.023688  304877 system_pods.go:61] "kube-scheduler-functional-605613" [3e293320-fb37-4cc6-8b32-6aa9053ca9e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:07:08.023690  304877 system_pods.go:61] "storage-provisioner" [46d9d073-fcf0-4a27-bc1e-bad777af7399] Running
	I1123 09:07:08.023695  304877 system_pods.go:74] duration metric: took 3.987918ms to wait for pod list to return data ...
	I1123 09:07:08.023702  304877 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:07:08.026662  304877 default_sa.go:45] found service account: "default"
	I1123 09:07:08.026684  304877 default_sa.go:55] duration metric: took 2.975829ms for default service account to be created ...
	I1123 09:07:08.026696  304877 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:07:08.029869  304877 system_pods.go:86] 8 kube-system pods found
	I1123 09:07:08.029888  304877 system_pods.go:89] "coredns-66bc5c9577-6fdcr" [1ec570f3-fcac-4284-90cc-7ed33de0eb96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:08.029896  304877 system_pods.go:89] "etcd-functional-605613" [0cd6920b-7c7e-4bec-abfb-5041decac7f0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:07:08.029901  304877 system_pods.go:89] "kindnet-6q654" [2e7c0b44-3086-408c-a011-c0f13da7b79f] Running
	I1123 09:07:08.029907  304877 system_pods.go:89] "kube-apiserver-functional-605613" [ea24ff68-59e0-4675-b7ac-d6e3908023f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:07:08.029913  304877 system_pods.go:89] "kube-controller-manager-functional-605613" [1d8157c0-4df4-41c7-b9ba-ad76bc74063a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:07:08.029916  304877 system_pods.go:89] "kube-proxy-4krg7" [44d3b3a5-1037-4b1d-ba22-784ea9f75de0] Running
	I1123 09:07:08.029924  304877 system_pods.go:89] "kube-scheduler-functional-605613" [3e293320-fb37-4cc6-8b32-6aa9053ca9e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:07:08.029927  304877 system_pods.go:89] "storage-provisioner" [46d9d073-fcf0-4a27-bc1e-bad777af7399] Running
	I1123 09:07:08.029934  304877 system_pods.go:126] duration metric: took 3.232694ms to wait for k8s-apps to be running ...
	I1123 09:07:08.029940  304877 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:07:08.029997  304877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:07:08.044884  304877 system_svc.go:56] duration metric: took 14.932782ms WaitForService to wait for kubelet
	I1123 09:07:08.044903  304877 kubeadm.go:587] duration metric: took 1.233702741s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:07:08.044920  304877 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:07:08.048300  304877 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:07:08.048316  304877 node_conditions.go:123] node cpu capacity is 2
	I1123 09:07:08.048327  304877 node_conditions.go:105] duration metric: took 3.402903ms to run NodePressure ...
	I1123 09:07:08.048340  304877 start.go:242] waiting for startup goroutines ...
	I1123 09:07:08.048346  304877 start.go:247] waiting for cluster config update ...
	I1123 09:07:08.048356  304877 start.go:256] writing updated cluster config ...
	I1123 09:07:08.048687  304877 ssh_runner.go:195] Run: rm -f paused
	I1123 09:07:08.052715  304877 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:07:08.056392  304877 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6fdcr" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:07:10.062859  304877 pod_ready.go:104] pod "coredns-66bc5c9577-6fdcr" is not "Ready", error: <nil>
	I1123 09:07:11.062808  304877 pod_ready.go:94] pod "coredns-66bc5c9577-6fdcr" is "Ready"
	I1123 09:07:11.062823  304877 pod_ready.go:86] duration metric: took 3.006416676s for pod "coredns-66bc5c9577-6fdcr" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:11.065971  304877 pod_ready.go:83] waiting for pod "etcd-functional-605613" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:07:13.071819  304877 pod_ready.go:104] pod "etcd-functional-605613" is not "Ready", error: <nil>
	W1123 09:07:15.571106  304877 pod_ready.go:104] pod "etcd-functional-605613" is not "Ready", error: <nil>
	W1123 09:07:17.571698  304877 pod_ready.go:104] pod "etcd-functional-605613" is not "Ready", error: <nil>
	I1123 09:07:19.071508  304877 pod_ready.go:94] pod "etcd-functional-605613" is "Ready"
	I1123 09:07:19.071521  304877 pod_ready.go:86] duration metric: took 8.00553719s for pod "etcd-functional-605613" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:19.073897  304877 pod_ready.go:83] waiting for pod "kube-apiserver-functional-605613" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:19.078510  304877 pod_ready.go:94] pod "kube-apiserver-functional-605613" is "Ready"
	I1123 09:07:19.078523  304877 pod_ready.go:86] duration metric: took 4.614322ms for pod "kube-apiserver-functional-605613" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:19.080832  304877 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-605613" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:19.085344  304877 pod_ready.go:94] pod "kube-controller-manager-functional-605613" is "Ready"
	I1123 09:07:19.085358  304877 pod_ready.go:86] duration metric: took 4.513085ms for pod "kube-controller-manager-functional-605613" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:19.089857  304877 pod_ready.go:83] waiting for pod "kube-proxy-4krg7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:19.269489  304877 pod_ready.go:94] pod "kube-proxy-4krg7" is "Ready"
	I1123 09:07:19.269503  304877 pod_ready.go:86] duration metric: took 179.633398ms for pod "kube-proxy-4krg7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:19.470000  304877 pod_ready.go:83] waiting for pod "kube-scheduler-functional-605613" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:19.869869  304877 pod_ready.go:94] pod "kube-scheduler-functional-605613" is "Ready"
	I1123 09:07:19.869884  304877 pod_ready.go:86] duration metric: took 399.871298ms for pod "kube-scheduler-functional-605613" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:19.869895  304877 pod_ready.go:40] duration metric: took 11.817156266s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:07:19.921765  304877 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:07:19.926774  304877 out.go:179] * Done! kubectl is now configured to use "functional-605613" cluster and "default" namespace by default
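
For readers following the healthz retry loop in the log above: the apiserver health endpoints (/healthz, /livez, /readyz) accept a verbose query parameter that produces the per-check [+]/[-] listing shown, and the 500 responses turn into 200 once the remaining poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, as seen at 09:07:05. Below is a minimal standalone Go sketch of the same poll-until-200 pattern. It is illustrative only (not minikube's api_server.go); the endpoint URL and the InsecureSkipVerify TLS setting are assumptions suitable only for a throwaway local cluster.

// healthz_poll.go: poll a kube-apiserver health endpoint until it reports 200 OK.
// Illustrative sketch only; a real client should trust the cluster CA instead of
// disabling TLS verification.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above; adjust for your cluster (assumption).
	url := "https://192.168.49.2:8441/healthz?verbose"

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:")
				fmt.Println(string(body))
				return
			}
			// With ?verbose, a 500 lists which poststarthooks are still failing.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for apiserver health")
}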
	
	
	==> CRI-O <==
	Nov 23 09:07:59 functional-605613 crio[3702]: time="2025-11-23T09:07:59.896303709Z" level=info msg="Stopping pod sandbox: ae798199299af7b8ff36403c661a96bfc8781bca66d8a8e9aa82d79fdc538bcd" id=82bdac92-70ba-4605-8a93-572b4538a86b name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 09:07:59 functional-605613 crio[3702]: time="2025-11-23T09:07:59.896361342Z" level=info msg="Stopped pod sandbox (already stopped): ae798199299af7b8ff36403c661a96bfc8781bca66d8a8e9aa82d79fdc538bcd" id=82bdac92-70ba-4605-8a93-572b4538a86b name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 09:07:59 functional-605613 crio[3702]: time="2025-11-23T09:07:59.896833347Z" level=info msg="Removing pod sandbox: ae798199299af7b8ff36403c661a96bfc8781bca66d8a8e9aa82d79fdc538bcd" id=3f9f9d5d-f4c8-4121-a6d9-66fe758435ad name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 09:07:59 functional-605613 crio[3702]: time="2025-11-23T09:07:59.900413647Z" level=info msg="Removed pod sandbox: ae798199299af7b8ff36403c661a96bfc8781bca66d8a8e9aa82d79fdc538bcd" id=3f9f9d5d-f4c8-4121-a6d9-66fe758435ad name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 09:07:59 functional-605613 crio[3702]: time="2025-11-23T09:07:59.902224843Z" level=info msg="Stopping pod sandbox: 992b08402bda352895a77f4b4c32d83900d2f0259bf36c06fda09721df9346ba" id=ec5bf116-5def-4982-ba3d-8d10caf6b370 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 09:07:59 functional-605613 crio[3702]: time="2025-11-23T09:07:59.902391704Z" level=info msg="Stopped pod sandbox (already stopped): 992b08402bda352895a77f4b4c32d83900d2f0259bf36c06fda09721df9346ba" id=ec5bf116-5def-4982-ba3d-8d10caf6b370 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 09:07:59 functional-605613 crio[3702]: time="2025-11-23T09:07:59.902754071Z" level=info msg="Removing pod sandbox: 992b08402bda352895a77f4b4c32d83900d2f0259bf36c06fda09721df9346ba" id=7e426c1c-2960-4106-bb8a-d36c8cfb7d81 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 09:07:59 functional-605613 crio[3702]: time="2025-11-23T09:07:59.908019026Z" level=info msg="Removed pod sandbox: 992b08402bda352895a77f4b4c32d83900d2f0259bf36c06fda09721df9346ba" id=7e426c1c-2960-4106-bb8a-d36c8cfb7d81 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 09:08:00 functional-605613 crio[3702]: time="2025-11-23T09:08:00.980447578Z" level=info msg="Running pod sandbox: default/hello-node-75c85bcc94-h8vk4/POD" id=0e362ddc-8de7-444b-a6d0-a09d99d70191 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:08:00 functional-605613 crio[3702]: time="2025-11-23T09:08:00.980510807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:00 functional-605613 crio[3702]: time="2025-11-23T09:08:00.990158671Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-h8vk4 Namespace:default ID:b38068599801aeca5305161ecf30d776717a0a1cca15bdd687a1410d0cb20d4e UID:b33fadcd-6473-49f8-bfb8-18676c04a3aa NetNS:/var/run/netns/f07da3c6-2085-4e1a-84a2-07857e677ca3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049f6f0}] Aliases:map[]}"
	Nov 23 09:08:00 functional-605613 crio[3702]: time="2025-11-23T09:08:00.990198999Z" level=info msg="Adding pod default_hello-node-75c85bcc94-h8vk4 to CNI network \"kindnet\" (type=ptp)"
	Nov 23 09:08:01 functional-605613 crio[3702]: time="2025-11-23T09:08:01.00251146Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-h8vk4 Namespace:default ID:b38068599801aeca5305161ecf30d776717a0a1cca15bdd687a1410d0cb20d4e UID:b33fadcd-6473-49f8-bfb8-18676c04a3aa NetNS:/var/run/netns/f07da3c6-2085-4e1a-84a2-07857e677ca3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049f6f0}] Aliases:map[]}"
	Nov 23 09:08:01 functional-605613 crio[3702]: time="2025-11-23T09:08:01.002677271Z" level=info msg="Checking pod default_hello-node-75c85bcc94-h8vk4 for CNI network kindnet (type=ptp)"
	Nov 23 09:08:01 functional-605613 crio[3702]: time="2025-11-23T09:08:01.0062359Z" level=info msg="Ran pod sandbox b38068599801aeca5305161ecf30d776717a0a1cca15bdd687a1410d0cb20d4e with infra container: default/hello-node-75c85bcc94-h8vk4/POD" id=0e362ddc-8de7-444b-a6d0-a09d99d70191 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:08:01 functional-605613 crio[3702]: time="2025-11-23T09:08:01.009052203Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=79c3d1e8-4326-407c-8b39-0a27ec408bb1 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:08:16 functional-605613 crio[3702]: time="2025-11-23T09:08:16.936336197Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=456bf424-b83a-4a0c-b464-22f7e10a19fc name=/runtime.v1.ImageService/PullImage
	Nov 23 09:08:18 functional-605613 crio[3702]: time="2025-11-23T09:08:18.936903619Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=99a94b52-a913-418a-9f20-e1e3f09c8089 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:08:46 functional-605613 crio[3702]: time="2025-11-23T09:08:46.937779351Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=78bc03b8-74a2-4ffd-9816-3a55723bbf34 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:09:06 functional-605613 crio[3702]: time="2025-11-23T09:09:06.937057127Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1f2d45cf-872b-4916-9c95-c6aee4b82787 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:09:34 functional-605613 crio[3702]: time="2025-11-23T09:09:34.936948456Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=46668d48-637b-4f6f-a861-31dad2832979 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:10:33 functional-605613 crio[3702]: time="2025-11-23T09:10:33.938128602Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5f3e4854-b4e8-4934-99f0-753dedc5173d name=/runtime.v1.ImageService/PullImage
	Nov 23 09:10:55 functional-605613 crio[3702]: time="2025-11-23T09:10:55.937234454Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=dbf33b1c-a0fd-4630-b48c-83625e5b8516 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:13:20 functional-605613 crio[3702]: time="2025-11-23T09:13:20.937187511Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f434d6d0-5247-42af-a660-e3ecf6bd2ffb name=/runtime.v1.ImageService/PullImage
	Nov 23 09:13:47 functional-605613 crio[3702]: time="2025-11-23T09:13:47.936642632Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=55684315-16f9-4d38-96a4-afdc884d385f name=/runtime.v1.ImageService/PullImage
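
The CRI-O entries above show PullImage requests for kicbase/echo-server:latest being reissued over several minutes without a corresponding completion entry. As a hedged illustration of how such a pull can be exercised directly against the same CRI socket, here is a small Go sketch using the CRI image service API. The socket path and image name are taken from the log; the module versions, timeout, and overall approach are assumptions, and this is not part of the minikube test suite.

// cri_pull_check.go: ask the CRI runtime (CRI-O here) whether an image is present
// locally and, if not, request a pull. Illustrative sketch only.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket and image taken from the CRI-O log above (assumptions if reused elsewhere).
	const socket = "unix:///var/run/crio/crio.sock"
	const image = "kicbase/echo-server:latest"

	conn, err := grpc.Dial(socket, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewImageServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// ImageStatus returns a nil Image when the runtime does not have it locally.
	status, err := client.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: image},
	})
	if err != nil {
		log.Fatalf("ImageStatus: %v", err)
	}
	if status.Image != nil {
		fmt.Println("image already present:", status.Image.Id)
		return
	}

	// Requesting the pull explicitly surfaces a slow or failing registry as a
	// long-blocking call or an error, rather than silent periodic retries.
	resp, err := client.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: image},
	})
	if err != nil {
		log.Fatalf("PullImage: %v", err)
	}
	fmt.Println("pulled:", resp.ImageRef)
}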
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ac7b581f6d5a1       docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712   9 minutes ago       Running             myfrontend                0                   ad64635d58a9a       sp-pod                                      default
	948e27fad8012       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   87381e5d7a34d       nginx-svc                                   default
	321cbcfa29347       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   3                   9a4c2608a51b4       coredns-66bc5c9577-6fdcr                    kube-system
	2e2129201be39       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                3                   027af25e2a0df       kube-proxy-4krg7                            kube-system
	0cb7895618b03       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   a309df809d2e2       kindnet-6q654                               kube-system
	a37b4586e953a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       3                   1b965f4fb3505       storage-provisioner                         kube-system
	97d4ca7424166       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   2aa8676487bbf       kube-apiserver-functional-605613            kube-system
	575090a182cad       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            3                   a026f9f4f189e       kube-scheduler-functional-605613            kube-system
	98bb6a14eb6e6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   3                   d8651f3a85ad2       kube-controller-manager-functional-605613   kube-system
	cfce6081d330a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      3                   4c06ac8d4398a       etcd-functional-605613                      kube-system
	f6dd7a6da2ab2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   2                   9a4c2608a51b4       coredns-66bc5c9577-6fdcr                    kube-system
	50fab0b70d3fd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       2                   1b965f4fb3505       storage-provisioner                         kube-system
	27fdf0922c7b8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                2                   027af25e2a0df       kube-proxy-4krg7                            kube-system
	befdc9676c305       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      2                   4c06ac8d4398a       etcd-functional-605613                      kube-system
	e5acddeec0ecb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               2                   a309df809d2e2       kindnet-6q654                               kube-system
	236246867f434       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   2                   d8651f3a85ad2       kube-controller-manager-functional-605613   kube-system
	1731721c79a8d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            2                   a026f9f4f189e       kube-scheduler-functional-605613            kube-system
	
	
	==> coredns [321cbcfa2934700d2a9d88506c5727bf6ec20b8b8475869a8c1e44b711480bbf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35615 - 32269 "HINFO IN 1323888299270688746.1772346169746699857. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036511909s
	
	
	==> coredns [f6dd7a6da2ab2fe2b86670495ab54128cb9c1833f7adba040252cb2bd7f41d92] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44525 - 46877 "HINFO IN 4600731125406374154.8786516996341667974. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.037309993s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-605613
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-605613
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=functional-605613
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_05_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:05:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-605613
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:17:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:17:35 +0000   Sun, 23 Nov 2025 09:04:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:17:35 +0000   Sun, 23 Nov 2025 09:04:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:17:35 +0000   Sun, 23 Nov 2025 09:04:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:17:35 +0000   Sun, 23 Nov 2025 09:05:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-605613
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                feea811c-ac1f-4aff-93fc-eba52281bc43
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-h8vk4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  default                     hello-node-connect-7d85dfc575-w25h4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  kube-system                 coredns-66bc5c9577-6fdcr                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-605613                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-6q654                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-605613             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-605613    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4krg7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-605613             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-605613 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-605613 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-605613 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-605613 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-605613 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-605613 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node functional-605613 event: Registered Node functional-605613 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-605613 status is now: NodeReady
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-605613 event: Registered Node functional-605613 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-605613 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-605613 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-605613 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-605613 event: Registered Node functional-605613 in Controller
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015154] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.511595] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034200] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753844] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.833249] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:37] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/22/fs': -2
	[Nov23 08:56] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 08:58] overlayfs: idmapped layers are currently not supported
	[  +0.083595] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov23 09:04] overlayfs: idmapped layers are currently not supported
	[ +53.074501] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [befdc9676c305a763eb48fde7c5fbf9836fa59498658923536cfeec681d9b2ef] <==
	{"level":"warn","ts":"2025-11-23T09:06:17.774421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:17.790282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:17.808690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:17.842228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:17.853849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:17.871019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:17.942606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49854","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T09:06:43.419871Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-23T09:06:43.419941Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-605613","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-23T09:06:43.420079Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T09:06:43.566603Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-11-23T09:06:43.566767Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T09:06:43.566808Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T09:06:43.566817Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-11-23T09:06:43.566779Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-23T09:06:43.566894Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T09:06:43.566910Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T09:06:43.566918Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T09:06:43.566901Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-23T09:06:43.567030Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-23T09:06:43.567067Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-23T09:06:43.570886Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-23T09:06:43.570995Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T09:06:43.571114Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-23T09:06:43.571165Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-605613","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [cfce6081d330a44378e27f030c6c4c555c5a928188dae99f5cbbe5b10248427f] <==
	{"level":"warn","ts":"2025-11-23T09:07:02.753035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:02.766148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:02.784024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:02.807065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:02.824987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:02.837236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:02.859408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:02.876221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:02.890019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:02.910273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:02.924059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:02.947331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:02.965371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:02.977970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:03.010610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:03.051854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:03.070279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:03.092583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:03.123057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:03.137709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:03.156627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:03.214778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38476","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T09:17:01.962456Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1161}
	{"level":"info","ts":"2025-11-23T09:17:01.991745Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1161,"took":"28.717474ms","hash":989039004,"current-db-size-bytes":3383296,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1503232,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-23T09:17:01.991819Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":989039004,"revision":1161,"compact-revision":-1}
	
	
	==> kernel <==
	 09:17:42 up  2:00,  0 user,  load average: 0.12, 0.36, 1.40
	Linux functional-605613 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0cb7895618b03c96ac5bd1a1f15e8267fd04c8dba6824ce6d470967dafad4f1c] <==
	I1123 09:15:35.577539       1 main.go:301] handling current node
	I1123 09:15:45.580360       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:15:45.580408       1 main.go:301] handling current node
	I1123 09:15:55.577490       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:15:55.577521       1 main.go:301] handling current node
	I1123 09:16:05.576756       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:16:05.576907       1 main.go:301] handling current node
	I1123 09:16:15.579897       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:16:15.579930       1 main.go:301] handling current node
	I1123 09:16:25.585454       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:16:25.585489       1 main.go:301] handling current node
	I1123 09:16:35.577471       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:16:35.577502       1 main.go:301] handling current node
	I1123 09:16:45.579678       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:16:45.579784       1 main.go:301] handling current node
	I1123 09:16:55.578038       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:16:55.578163       1 main.go:301] handling current node
	I1123 09:17:05.578049       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:17:05.578169       1 main.go:301] handling current node
	I1123 09:17:15.576594       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:17:15.576710       1 main.go:301] handling current node
	I1123 09:17:25.578068       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:17:25.578104       1 main.go:301] handling current node
	I1123 09:17:35.578352       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:17:35.578488       1 main.go:301] handling current node
	
	
	==> kindnet [e5acddeec0ecbdfae429f463134a50169cc719304fa4da536c75c8c0757aec66] <==
	I1123 09:06:13.666072       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:06:13.666262       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1123 09:06:13.666393       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:06:13.666404       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:06:13.666417       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:06:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:06:13.882725       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:06:13.882818       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:06:13.882851       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:06:13.883226       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 09:06:13.883439       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 09:06:13.883586       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 09:06:13.883944       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 09:06:13.890295       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1123 09:06:18.883600       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:06:18.883644       1 metrics.go:72] Registering metrics
	I1123 09:06:18.883701       1 controller.go:711] "Syncing nftables rules"
	I1123 09:06:23.882410       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:06:23.882465       1 main.go:301] handling current node
	I1123 09:06:33.883103       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:06:33.883139       1 main.go:301] handling current node
	
	
	==> kube-apiserver [97d4ca7424166579ef526bfc0e4a5605a660d28faf58ea545cda36522a515ff1] <==
	I1123 09:07:04.260439       1 aggregator.go:171] initial CRD sync complete...
	I1123 09:07:04.260449       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 09:07:04.260457       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:07:04.260463       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:07:04.286682       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 09:07:04.336476       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 09:07:04.341469       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 09:07:04.341766       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 09:07:04.342479       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:07:04.973849       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:07:05.040926       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:07:06.519926       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:07:06.644704       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:07:06.718266       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:07:06.725690       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:07:07.794905       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:07:07.848699       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:07:07.892046       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:07:23.260949       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.9.79"}
	I1123 09:07:30.268362       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.162.187"}
	I1123 09:07:39.931352       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.72.240"}
	E1123 09:07:53.156441       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:34808: use of closed network connection
	E1123 09:08:00.540822       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:34824: use of closed network connection
	I1123 09:08:00.752971       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.43.201"}
	I1123 09:17:04.226615       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [236246867f43481ea2007e9b5d11fda36e46915de7b753584c931d07b225edca] <==
	I1123 09:06:21.178590       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 09:06:21.178619       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 09:06:21.180057       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:06:21.182361       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:06:21.184693       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 09:06:21.191021       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:06:21.191046       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:06:21.191053       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:06:21.193452       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:06:21.194877       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 09:06:21.203046       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 09:06:21.207384       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:06:21.208595       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:06:21.209594       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 09:06:21.209647       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:06:21.209700       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:06:21.209605       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:06:21.210066       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 09:06:21.210106       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:06:21.211394       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:06:21.213480       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:06:21.214856       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 09:06:21.214911       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 09:06:21.214864       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 09:06:21.230260       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [98bb6a14eb6e6fb9177735e85e81fe377f9e715f61b39ad346713145c0963b84] <==
	I1123 09:07:07.537508       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 09:07:07.537677       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 09:07:07.542913       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 09:07:07.543039       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:07:07.545469       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 09:07:07.545553       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 09:07:07.547742       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:07:07.553297       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:07:07.557526       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:07:07.557613       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 09:07:07.560660       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:07:07.562983       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:07:07.570911       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:07:07.572465       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 09:07:07.575691       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:07:07.582830       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:07:07.585892       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:07:07.585942       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 09:07:07.585999       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 09:07:07.587170       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:07:07.587350       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:07:07.588044       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:07:07.590578       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 09:07:07.592376       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:07:07.605042       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	
	
	==> kube-proxy [27fdf0922c7b84fa5200180b391ccf11f58de52f71292a4c51318e1ba8d0b407] <==
	I1123 09:06:16.823825       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:06:17.140168       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:06:18.897173       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:06:18.897443       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 09:06:18.897544       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:06:19.054285       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:06:19.054401       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:06:19.066694       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:06:19.067021       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:06:19.067034       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:06:19.076149       1 config.go:200] "Starting service config controller"
	I1123 09:06:19.076179       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:06:19.076207       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:06:19.076212       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:06:19.076224       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:06:19.076229       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:06:19.080096       1 config.go:309] "Starting node config controller"
	I1123 09:06:19.080129       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:06:19.080137       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:06:19.176908       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:06:19.182578       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:06:19.182597       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [2e2129201be394ea4e905bf4beb6f6eb07eb578b851f1fc72a62c9cf933c8d6f] <==
	I1123 09:07:05.504476       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:07:05.587642       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:07:05.689677       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:07:05.689710       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 09:07:05.689793       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:07:05.708762       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:07:05.708873       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:07:05.713225       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:07:05.713662       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:07:05.713683       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:07:05.720366       1 config.go:200] "Starting service config controller"
	I1123 09:07:05.721289       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:07:05.721318       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:07:05.721323       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:07:05.720482       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:07:05.723038       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:07:05.721071       1 config.go:309] "Starting node config controller"
	I1123 09:07:05.723049       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:07:05.723055       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:07:05.821489       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:07:05.823955       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:07:05.824019       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1731721c79a8d80660bdc38f510aaf9242c382f500f90f5e270d6b65b006b0d8] <==
	E1123 09:06:18.769871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:06:18.774810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:06:18.774880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:06:18.774939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:06:18.774987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:06:18.775030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:06:18.775074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:06:18.775119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:06:18.775162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:06:18.775205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:06:18.775250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:06:18.775293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:06:18.775340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:06:18.775473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:06:18.775520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:06:18.775560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:06:18.775625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:06:18.775843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1123 09:06:20.197193       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:06:43.412307       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1123 09:06:43.412341       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1123 09:06:43.412369       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1123 09:06:43.412391       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:06:43.412588       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1123 09:06:43.412611       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [575090a182cadfd095c54fb1b51137bc9ed13dbdb2fa2d25ba64e3dd0d815522] <==
	I1123 09:07:03.659505       1 serving.go:386] Generated self-signed cert in-memory
	I1123 09:07:05.185916       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:07:05.185952       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:07:05.206359       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:07:05.206432       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 09:07:05.206449       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 09:07:05.206471       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:07:05.208634       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:07:05.208648       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:07:05.208666       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:07:05.208671       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:07:05.308624       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 09:07:05.308728       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:07:05.308819       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:15:05 functional-605613 kubelet[4028]: E1123 09:15:05.936015    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h8vk4" podUID="b33fadcd-6473-49f8-bfb8-18676c04a3aa"
	Nov 23 09:15:13 functional-605613 kubelet[4028]: E1123 09:15:13.937223    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w25h4" podUID="523799cc-4688-4483-80cb-a19fffb1c015"
	Nov 23 09:15:16 functional-605613 kubelet[4028]: E1123 09:15:16.935811    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h8vk4" podUID="b33fadcd-6473-49f8-bfb8-18676c04a3aa"
	Nov 23 09:15:25 functional-605613 kubelet[4028]: E1123 09:15:25.936605    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w25h4" podUID="523799cc-4688-4483-80cb-a19fffb1c015"
	Nov 23 09:15:27 functional-605613 kubelet[4028]: E1123 09:15:27.937174    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h8vk4" podUID="b33fadcd-6473-49f8-bfb8-18676c04a3aa"
	Nov 23 09:15:37 functional-605613 kubelet[4028]: E1123 09:15:37.936115    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w25h4" podUID="523799cc-4688-4483-80cb-a19fffb1c015"
	Nov 23 09:15:39 functional-605613 kubelet[4028]: E1123 09:15:39.936693    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h8vk4" podUID="b33fadcd-6473-49f8-bfb8-18676c04a3aa"
	Nov 23 09:15:50 functional-605613 kubelet[4028]: E1123 09:15:50.936381    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h8vk4" podUID="b33fadcd-6473-49f8-bfb8-18676c04a3aa"
	Nov 23 09:15:50 functional-605613 kubelet[4028]: E1123 09:15:50.936908    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w25h4" podUID="523799cc-4688-4483-80cb-a19fffb1c015"
	Nov 23 09:16:04 functional-605613 kubelet[4028]: E1123 09:16:04.936610    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h8vk4" podUID="b33fadcd-6473-49f8-bfb8-18676c04a3aa"
	Nov 23 09:16:05 functional-605613 kubelet[4028]: E1123 09:16:05.943371    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w25h4" podUID="523799cc-4688-4483-80cb-a19fffb1c015"
	Nov 23 09:16:16 functional-605613 kubelet[4028]: E1123 09:16:16.936503    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h8vk4" podUID="b33fadcd-6473-49f8-bfb8-18676c04a3aa"
	Nov 23 09:16:19 functional-605613 kubelet[4028]: E1123 09:16:19.937147    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w25h4" podUID="523799cc-4688-4483-80cb-a19fffb1c015"
	Nov 23 09:16:27 functional-605613 kubelet[4028]: E1123 09:16:27.936297    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h8vk4" podUID="b33fadcd-6473-49f8-bfb8-18676c04a3aa"
	Nov 23 09:16:34 functional-605613 kubelet[4028]: E1123 09:16:34.936375    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w25h4" podUID="523799cc-4688-4483-80cb-a19fffb1c015"
	Nov 23 09:16:42 functional-605613 kubelet[4028]: E1123 09:16:42.935547    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h8vk4" podUID="b33fadcd-6473-49f8-bfb8-18676c04a3aa"
	Nov 23 09:16:46 functional-605613 kubelet[4028]: E1123 09:16:46.935808    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w25h4" podUID="523799cc-4688-4483-80cb-a19fffb1c015"
	Nov 23 09:16:53 functional-605613 kubelet[4028]: E1123 09:16:53.936732    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h8vk4" podUID="b33fadcd-6473-49f8-bfb8-18676c04a3aa"
	Nov 23 09:16:58 functional-605613 kubelet[4028]: E1123 09:16:58.936352    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w25h4" podUID="523799cc-4688-4483-80cb-a19fffb1c015"
	Nov 23 09:17:08 functional-605613 kubelet[4028]: E1123 09:17:08.936381    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h8vk4" podUID="b33fadcd-6473-49f8-bfb8-18676c04a3aa"
	Nov 23 09:17:09 functional-605613 kubelet[4028]: E1123 09:17:09.937899    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w25h4" podUID="523799cc-4688-4483-80cb-a19fffb1c015"
	Nov 23 09:17:21 functional-605613 kubelet[4028]: E1123 09:17:21.936318    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h8vk4" podUID="b33fadcd-6473-49f8-bfb8-18676c04a3aa"
	Nov 23 09:17:24 functional-605613 kubelet[4028]: E1123 09:17:24.936240    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w25h4" podUID="523799cc-4688-4483-80cb-a19fffb1c015"
	Nov 23 09:17:33 functional-605613 kubelet[4028]: E1123 09:17:33.936482    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-h8vk4" podUID="b33fadcd-6473-49f8-bfb8-18676c04a3aa"
	Nov 23 09:17:36 functional-605613 kubelet[4028]: E1123 09:17:36.936126    4028 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-w25h4" podUID="523799cc-4688-4483-80cb-a19fffb1c015"
	
	
	==> storage-provisioner [50fab0b70d3fd58d6dd41b0f8f89fef22e46d3f1a454dacb862916b1a3a01dd8] <==
	I1123 09:06:16.134975       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:06:18.958386       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:06:18.958452       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 09:06:18.981707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:06:22.436599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:06:26.697193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:06:30.295314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:06:33.349455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:06:36.371469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:06:36.376361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:06:36.376680       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:06:36.376881       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-605613_19fa33e9-cdc5-44a0-8afc-4ad16671c130!
	I1123 09:06:36.377240       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a12aa536-fd26-419d-9a49-3b01fb563442", APIVersion:"v1", ResourceVersion:"591", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-605613_19fa33e9-cdc5-44a0-8afc-4ad16671c130 became leader
	W1123 09:06:36.385275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:06:36.390232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:06:36.478174       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-605613_19fa33e9-cdc5-44a0-8afc-4ad16671c130!
	W1123 09:06:38.393140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:06:38.398416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:06:40.401194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:06:40.405588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:06:42.409784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:06:42.414809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a37b4586e953a366972cb6121ed84cae1d5d07ec188ed49f2ab4754673436013] <==
	W1123 09:17:17.689231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:19.692960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:19.699927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:21.702711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:21.707121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:23.710414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:23.714623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:25.718025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:25.725061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:27.727770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:27.732181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:29.735710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:29.740061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:31.742855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:31.749516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:33.752230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:33.756796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:35.760080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:35.767668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:37.771151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:37.777926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:39.780919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:39.785480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:41.789007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:17:41.795108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-605613 -n functional-605613
helpers_test.go:269: (dbg) Run:  kubectl --context functional-605613 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-h8vk4 hello-node-connect-7d85dfc575-w25h4
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-605613 describe pod hello-node-75c85bcc94-h8vk4 hello-node-connect-7d85dfc575-w25h4
helpers_test.go:290: (dbg) kubectl --context functional-605613 describe pod hello-node-75c85bcc94-h8vk4 hello-node-connect-7d85dfc575-w25h4:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-h8vk4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-605613/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 09:08:00 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lqh4v (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lqh4v:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m42s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-h8vk4 to functional-605613
	  Normal   Pulling    6m48s (x5 over 9m42s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m48s (x5 over 9m42s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m48s (x5 over 9m42s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m35s (x20 over 9m42s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m22s (x21 over 9m42s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-w25h4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-605613/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 09:07:39 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6d7pg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6d7pg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-w25h4 to functional-605613
	  Normal   Pulling    7m10s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m10s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m10s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    5m1s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     5m1s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.52s)
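
Note on the repeated ErrImagePull events above: the kubelet reports that CRI-O refused the unqualified image name ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list"). With short-name enforcement (the short-name-mode setting in containers-registries.conf), an unqualified name that could resolve against more than one search registry is rejected instead of being pulled. A minimal workaround sketch, not part of the test run, assuming the image is published on Docker Hub and using an illustrative deployment name, is to deploy with a fully qualified reference so no short-name resolution is needed:

	# fully qualified image avoids short-name resolution under CRI-O's enforcing mode (deployment name is illustrative)
	kubectl --context functional-605613 create deployment echo-server-qualified --image=docker.io/kicbase/echo-server:latest
	# port is assumed here; kicbase/echo-server conventionally serves on 8080
	kubectl --context functional-605613 expose deployment echo-server-qualified --type=NodePort --port=8080

An equivalent node-side fix would be a short-name alias for kicbase/echo-server in a registries.conf.d drop-in, which keeps the unqualified name working without relaxing enforcement.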

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image load --daemon kicbase/echo-server:functional-605613 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-605613" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.13s)
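
The ImageLoadDaemon failure, and the ImageReloadDaemon and ImageTagAndLoadDaemon failures that follow, share one symptom: the load command is not reported as failing, but the expected kicbase/echo-server:functional-605613 tag is not visible to the subsequent image ls check. A quick manual way to inspect what the CRI-O runtime inside the node actually stores (not part of the automated test; the --format flag and crictl usage are assumed from the standard minikube and crictl CLIs):

	out/minikube-linux-arm64 -p functional-605613 image ls --format table
	out/minikube-linux-arm64 -p functional-605613 ssh -- sudo crictl images | grep echo-server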

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image load --daemon kicbase/echo-server:functional-605613 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-605613" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-605613
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image load --daemon kicbase/echo-server:functional-605613 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-605613" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image save kicbase/echo-server:functional-605613 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)
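
This failure, and the load-from-file failure that follows it, reduce to the same missing tarball: nothing is written by `image save`, so the later `image load` hits "no such file or directory". A minimal sketch of the save step with an explicit existence check, assuming the binary path, profile, tag, and output path shown in this run:

	// Hedged sketch: run `image save` and confirm the tarball actually landed on disk.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const (
			bin     = "out/minikube-linux-arm64"
			profile = "functional-605613"
			tag     = "kicbase/echo-server:functional-605613"
			tarball = "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar"
		)

		if out, err := exec.Command(bin, "-p", profile, "image", "save", tag, tarball).CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "image save failed: %v\n%s", err, out)
			os.Exit(1)
		}

		// The test asserts the file exists after `image save`; this is the check that failed above.
		info, err := os.Stat(tarball)
		if err != nil {
			fmt.Fprintf(os.Stderr, "tarball missing after save: %v\n", err)
			os.Exit(1)
		}
		fmt.Printf("saved %s (%d bytes)\n", tarball, info.Size())
	}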

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1123 09:07:34.929989  308468 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:07:34.930199  308468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:07:34.930212  308468 out.go:374] Setting ErrFile to fd 2...
	I1123 09:07:34.930217  308468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:07:34.930489  308468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:07:34.931135  308468 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:07:34.931262  308468 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:07:34.931777  308468 cli_runner.go:164] Run: docker container inspect functional-605613 --format={{.State.Status}}
	I1123 09:07:34.956487  308468 ssh_runner.go:195] Run: systemctl --version
	I1123 09:07:34.956549  308468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
	I1123 09:07:34.976504  308468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/functional-605613/id_rsa Username:docker}
	I1123 09:07:35.088247  308468 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1123 09:07:35.088314  308468 cache_images.go:255] Failed to load cached images for "functional-605613": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1123 09:07:35.088340  308468 cache_images.go:267] failed pushing to: functional-605613

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-605613
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image save --daemon kicbase/echo-server:functional-605613 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-605613
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-605613: exit status 1 (27.412383ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-605613

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-605613

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
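
Here the image never reappears in the host Docker daemon after `image save --daemon`. A minimal sketch of the same round trip, mirroring the localhost/-prefixed reference the test inspects above; the binary path, profile, and image names are taken from this run:

	// Hedged sketch: `image save --daemon` followed by the same docker-side inspect the test performs.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const (
			bin     = "out/minikube-linux-arm64"
			profile = "functional-605613"
			tag     = "kicbase/echo-server:functional-605613"
			// Reference the test expects to find in the Docker daemon after the save.
			daemonRef = "localhost/kicbase/echo-server:functional-605613"
		)

		if out, err := exec.Command(bin, "-p", profile, "image", "save", "--daemon", tag).CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "image save --daemon failed: %v\n%s", err, out)
			os.Exit(1)
		}

		// `docker image inspect` exits non-zero when the reference is absent, as seen above.
		if out, err := exec.Command("docker", "image", "inspect", daemonRef).CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "image not found in the Docker daemon: %v\n%s", err, out)
			os.Exit(1)
		}
		fmt.Println("image present in the Docker daemon")
	}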

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-605613 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-605613 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-h8vk4" [b33fadcd-6473-49f8-bfb8-18676c04a3aa] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1123 09:10:14.909550  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:10:42.610539  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:15:14.909524  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-605613 -n functional-605613
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-23 09:18:01.212268925 +0000 UTC m=+1242.068250820
functional_test.go:1460: (dbg) Run:  kubectl --context functional-605613 describe po hello-node-75c85bcc94-h8vk4 -n default
functional_test.go:1460: (dbg) kubectl --context functional-605613 describe po hello-node-75c85bcc94-h8vk4 -n default:
Name:             hello-node-75c85bcc94-h8vk4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-605613/192.168.49.2
Start Time:       Sun, 23 Nov 2025 09:08:00 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lqh4v (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lqh4v:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-h8vk4 to functional-605613
Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m53s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m40s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-605613 logs hello-node-75c85bcc94-h8vk4 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-605613 logs hello-node-75c85bcc94-h8vk4 -n default: exit status 1 (104.793336ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-h8vk4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-605613 logs hello-node-75c85bcc94-h8vk4 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.88s)
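
The root cause is visible in the pod events: cri-o is running with short-name resolution in enforcing mode, so the unqualified `kicbase/echo-server` is rejected as ambiguous and the pod stays in ImagePullBackOff. A minimal sketch of the same deployment using a fully qualified reference, which gives the runtime nothing to disambiguate; the docker.io registry prefix and :latest tag are assumptions, not something the report confirms:

	// Hedged sketch: deploy with a fully qualified image so cri-o's enforcing
	// short-name mode does not have to resolve an ambiguous short name.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func run(args ...string) {
		cmd := exec.Command(args[0], args[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "%v failed: %v\n", args, err)
			os.Exit(1)
		}
	}

	func main() {
		const (
			ctx   = "functional-605613"
			image = "docker.io/kicbase/echo-server:latest" // fully qualified; registry and tag assumed
		)
		run("kubectl", "--context", ctx, "create", "deployment", "hello-node", "--image", image)
		run("kubectl", "--context", ctx, "expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
		// Wait for readiness instead of polling events; the 2m budget is arbitrary for this sketch.
		run("kubectl", "--context", ctx, "wait", "--for=condition=Ready", "pod", "-l", "app=hello-node", "--timeout=120s")
	}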

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-605613 service --namespace=default --https --url hello-node: exit status 115 (525.807147ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32524
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-605613 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)
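
This failure, and the Format and URL failures below, are downstream of the DeployApp failure: the NodePort is allocated (a URL is even printed) but no pod is running behind the service, so minikube exits with SVC_UNREACHABLE. A minimal sketch that gates the URL lookup on pod readiness, assuming the binary path, profile, and service name from this run:

	// Hedged sketch: only ask minikube for the service URL once a backing pod is Ready.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const (
			bin     = "out/minikube-linux-arm64"
			profile = "functional-605613"
			svc     = "hello-node"
		)

		// Gate on pod readiness; if this times out, the lookup below would fail the same way.
		wait := exec.Command("kubectl", "--context", profile, "wait",
			"--for=condition=Ready", "pod", "-l", "app="+svc, "--timeout=120s")
		wait.Stdout, wait.Stderr = os.Stdout, os.Stderr
		if err := wait.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "no ready pod behind the service; skipping URL lookup")
			os.Exit(1)
		}

		out, err := exec.Command(bin, "-p", profile, "service", svc, "--url").CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "service --url failed: %v\n%s", err, out)
			os.Exit(1)
		}
		fmt.Printf("service URL: %s", out)
	}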

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-605613 service hello-node --url --format={{.IP}}: exit status 115 (510.295922ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-605613 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-605613 service hello-node --url: exit status 115 (532.702196ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32524
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-605613 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32524
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (413.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 stop --alsologtostderr -v 5
E1123 09:23:51.754709  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-857095 stop --alsologtostderr -v 5: (27.683717318s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 start --wait true --alsologtostderr -v 5
E1123 09:25:13.677640  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:25:14.909068  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:27:29.809178  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:27:57.519256  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:30:14.909210  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-857095 start --wait true --alsologtostderr -v 5: exit status 80 (6m22.166631437s)

                                                
                                                
-- stdout --
	* [ha-857095] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-857095" primary control-plane node in "ha-857095" cluster
	* Pulling base image v0.0.48-1763789673-21948 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	* Enabled addons: 
	
	* Starting "ha-857095-m02" control-plane node in "ha-857095" cluster
	* Pulling base image v0.0.48-1763789673-21948 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-857095-m03" control-plane node in "ha-857095" cluster
	* Pulling base image v0.0.48-1763789673-21948 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	* Starting "ha-857095-m04" worker node in "ha-857095" cluster
	* Pulling base image v0.0.48-1763789673-21948 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	  - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:23:56.195666  332015 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:23:56.195782  332015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:23:56.195793  332015 out.go:374] Setting ErrFile to fd 2...
	I1123 09:23:56.195799  332015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:23:56.196022  332015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:23:56.196372  332015 out.go:368] Setting JSON to false
	I1123 09:23:56.197168  332015 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7585,"bootTime":1763882251,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 09:23:56.197241  332015 start.go:143] virtualization:  
	I1123 09:23:56.202491  332015 out.go:179] * [ha-857095] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 09:23:56.205469  332015 notify.go:221] Checking for updates...
	I1123 09:23:56.205985  332015 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:23:56.209103  332015 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:23:56.212257  332015 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:23:56.214935  332015 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 09:23:56.217823  332015 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 09:23:56.220754  332015 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:23:56.224090  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:23:56.224192  332015 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:23:56.248091  332015 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 09:23:56.248221  332015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:23:56.316560  332015 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-23 09:23:56.306152339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:23:56.316667  332015 docker.go:319] overlay module found
	I1123 09:23:56.319905  332015 out.go:179] * Using the docker driver based on existing profile
	I1123 09:23:56.322883  332015 start.go:309] selected driver: docker
	I1123 09:23:56.322910  332015 start.go:927] validating driver "docker" against &{Name:ha-857095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:23:56.323070  332015 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:23:56.323169  332015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:23:56.383495  332015 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-23 09:23:56.374562034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:23:56.383895  332015 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:23:56.383914  332015 cni.go:84] Creating CNI manager for ""
	I1123 09:23:56.383965  332015 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1123 09:23:56.384008  332015 start.go:353] cluster config:
	{Name:ha-857095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:23:56.387318  332015 out.go:179] * Starting "ha-857095" primary control-plane node in "ha-857095" cluster
	I1123 09:23:56.390204  332015 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:23:56.393222  332015 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:23:56.395941  332015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:23:56.395987  332015 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 09:23:56.395997  332015 cache.go:65] Caching tarball of preloaded images
	I1123 09:23:56.396063  332015 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:23:56.396081  332015 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 09:23:56.396092  332015 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:23:56.396244  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:23:56.413619  332015 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:23:56.413643  332015 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:23:56.413663  332015 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:23:56.413694  332015 start.go:360] acquireMachinesLock for ha-857095: {Name:mk7ea4c3d6888276233865fa5f92414123c08091 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:23:56.413754  332015 start.go:364] duration metric: took 36.201µs to acquireMachinesLock for "ha-857095"
	I1123 09:23:56.413778  332015 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:23:56.413787  332015 fix.go:54] fixHost starting: 
	I1123 09:23:56.414049  332015 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:23:56.430596  332015 fix.go:112] recreateIfNeeded on ha-857095: state=Stopped err=<nil>
	W1123 09:23:56.430627  332015 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:23:56.433965  332015 out.go:252] * Restarting existing docker container for "ha-857095" ...
	I1123 09:23:56.434061  332015 cli_runner.go:164] Run: docker start ha-857095
	I1123 09:23:56.669371  332015 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:23:56.694309  332015 kic.go:430] container "ha-857095" state is running.
	I1123 09:23:56.694718  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095
	I1123 09:23:56.714939  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:23:56.715179  332015 machine.go:94] provisionDockerMachine start ...
	I1123 09:23:56.715249  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:23:56.739434  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:23:56.739774  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33182 <nil> <nil>}
	I1123 09:23:56.739790  332015 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:23:56.740583  332015 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45372->127.0.0.1:33182: read: connection reset by peer
	I1123 09:23:59.888928  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095
	
	I1123 09:23:59.888954  332015 ubuntu.go:182] provisioning hostname "ha-857095"
	I1123 09:23:59.889018  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:23:59.906579  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:23:59.906895  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33182 <nil> <nil>}
	I1123 09:23:59.906906  332015 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-857095 && echo "ha-857095" | sudo tee /etc/hostname
	I1123 09:24:00.143191  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095
	
	I1123 09:24:00.143304  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:00.200109  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:00.200444  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33182 <nil> <nil>}
	I1123 09:24:00.200460  332015 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857095/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:24:00.391079  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:24:00.391118  332015 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 09:24:00.391140  332015 ubuntu.go:190] setting up certificates
	I1123 09:24:00.391151  332015 provision.go:84] configureAuth start
	I1123 09:24:00.391221  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095
	I1123 09:24:00.416269  332015 provision.go:143] copyHostCerts
	I1123 09:24:00.416328  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:24:00.416373  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 09:24:00.416396  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:24:00.416502  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 09:24:00.416616  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:24:00.416643  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 09:24:00.416649  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:24:00.416685  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 09:24:00.416740  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:24:00.416764  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 09:24:00.416769  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:24:00.416796  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 09:24:00.416852  332015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.ha-857095 san=[127.0.0.1 192.168.49.2 ha-857095 localhost minikube]
	I1123 09:24:00.654716  332015 provision.go:177] copyRemoteCerts
	I1123 09:24:00.654793  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:24:00.654834  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:00.677057  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:24:00.781001  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1123 09:24:00.781107  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:24:00.798881  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1123 09:24:00.798961  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1123 09:24:00.816589  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1123 09:24:00.816669  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:24:00.834536  332015 provision.go:87] duration metric: took 443.371132ms to configureAuth
	I1123 09:24:00.834605  332015 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:24:00.834885  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:24:00.835007  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:00.852135  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:00.852465  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33182 <nil> <nil>}
	I1123 09:24:00.852484  332015 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:24:01.230722  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:24:01.230745  332015 machine.go:97] duration metric: took 4.515545369s to provisionDockerMachine
	I1123 09:24:01.230757  332015 start.go:293] postStartSetup for "ha-857095" (driver="docker")
	I1123 09:24:01.230784  332015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:24:01.230849  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:24:01.230895  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:01.255652  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:24:01.361493  332015 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:24:01.364819  332015 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:24:01.364849  332015 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:24:01.364861  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 09:24:01.364917  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 09:24:01.364992  332015 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 09:24:01.365000  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /etc/ssl/certs/2849042.pem
	I1123 09:24:01.365102  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:24:01.373236  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:24:01.391175  332015 start.go:296] duration metric: took 160.402274ms for postStartSetup
	I1123 09:24:01.391305  332015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:24:01.391349  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:01.408403  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:24:01.514432  332015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:24:01.519130  332015 fix.go:56] duration metric: took 5.105336191s for fixHost
	I1123 09:24:01.519158  332015 start.go:83] releasing machines lock for "ha-857095", held for 5.105389919s
	I1123 09:24:01.519225  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095
	I1123 09:24:01.535905  332015 ssh_runner.go:195] Run: cat /version.json
	I1123 09:24:01.535965  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:01.536231  332015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:24:01.536282  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:01.562880  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:24:01.565249  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:24:01.665009  332015 ssh_runner.go:195] Run: systemctl --version
	I1123 09:24:01.757828  332015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:24:01.794910  332015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:24:01.799455  332015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:24:01.799605  332015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:24:01.807720  332015 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:24:01.807746  332015 start.go:496] detecting cgroup driver to use...
	I1123 09:24:01.807800  332015 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:24:01.807878  332015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:24:01.822720  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:24:01.836248  332015 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:24:01.836404  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:24:01.853658  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:24:01.867264  332015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:24:01.974745  332015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:24:02.101306  332015 docker.go:234] disabling docker service ...
	I1123 09:24:02.101464  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:24:02.117932  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:24:02.131548  332015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:24:02.243604  332015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:24:02.362672  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:24:02.376516  332015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:24:02.391962  332015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:24:02.392048  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.400619  332015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:24:02.400698  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.410062  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.419774  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.429277  332015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:24:02.438031  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.447555  332015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.455833  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.464518  332015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:24:02.472029  332015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:24:02.479828  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:24:02.606510  332015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:24:02.773593  332015 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:24:02.773712  332015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:24:02.778273  332015 start.go:564] Will wait 60s for crictl version
	I1123 09:24:02.778386  332015 ssh_runner.go:195] Run: which crictl
	I1123 09:24:02.782031  332015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:24:02.805950  332015 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:24:02.806105  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:24:02.837219  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:24:02.868046  332015 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:24:02.870882  332015 cli_runner.go:164] Run: docker network inspect ha-857095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:24:02.888727  332015 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 09:24:02.893087  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
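The bash one-liner above is minikube's idempotent /etc/hosts update: drop any stale host.minikube.internal line, then re-append the gateway mapping. A hypothetical stand-alone helper doing the same thing (update_hosts is not part of minikube) might look like:

    # pin NAME to IP in /etc/hosts, replacing any previous tab-separated entry for NAME
    update_hosts() {
        local ip="$1" name="$2"
        { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
        sudo cp "/tmp/h.$$" /etc/hosts
    }
    update_hosts 192.168.49.1 host.minikube.internal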
	I1123 09:24:02.903114  332015 kubeadm.go:884] updating cluster {Name:ha-857095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:24:02.903266  332015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:24:02.903340  332015 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:24:02.938058  332015 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:24:02.938082  332015 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:24:02.938142  332015 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:24:02.965340  332015 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:24:02.965366  332015 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:24:02.965376  332015 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1123 09:24:02.965526  332015 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-857095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
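The [Unit]/[Service] snippet above is the kubelet systemd drop-in that gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps later (the 359-byte scp below). To inspect the effective unit on the node, assuming systemd inside the kicbase container:

    sudo systemctl cat kubelet      # kubelet.service plus the 10-kubeadm.conf drop-in
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf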
	I1123 09:24:02.965621  332015 ssh_runner.go:195] Run: crio config
	I1123 09:24:03.024329  332015 cni.go:84] Creating CNI manager for ""
	I1123 09:24:03.024405  332015 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1123 09:24:03.024439  332015 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:24:03.024493  332015 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-857095 NodeName:ha-857095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:24:03.024670  332015 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-857095"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
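This generated config is what lands in /var/tmp/minikube/kubeadm.yaml.new via the 2206-byte scp below. If the bundled kubeadm supports the validate subcommand (present in recent releases), the file can be sanity-checked in place, for example:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new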
	
	I1123 09:24:03.024706  332015 kube-vip.go:115] generating kube-vip config ...
	I1123 09:24:03.024788  332015 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1123 09:24:03.037111  332015 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:24:03.037290  332015 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
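Note the lsmod check above: with no ip_vs modules loaded, kube-vip skips IPVS-based control-plane load-balancing and only manages the ARP-advertised VIP 192.168.49.254 (vip_arp/cp_enable in the manifest). To reproduce the check, or to preload the modules on a host where IPVS is wanted (module availability depends on the host kernel):

    lsmod | grep ip_vs || echo "ip_vs not loaded"
    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh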
	I1123 09:24:03.037395  332015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:24:03.045237  332015 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:24:03.045328  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1123 09:24:03.053429  332015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1123 09:24:03.066204  332015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:24:03.078929  332015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1123 09:24:03.092229  332015 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1123 09:24:03.104792  332015 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1123 09:24:03.108474  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:24:03.118280  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:24:03.231167  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:24:03.246187  332015 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095 for IP: 192.168.49.2
	I1123 09:24:03.246257  332015 certs.go:195] generating shared ca certs ...
	I1123 09:24:03.246288  332015 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:03.246475  332015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 09:24:03.246549  332015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 09:24:03.246586  332015 certs.go:257] generating profile certs ...
	I1123 09:24:03.246711  332015 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key
	I1123 09:24:03.246768  332015 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.fbc14aa1
	I1123 09:24:03.246799  332015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt.fbc14aa1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1123 09:24:03.300262  332015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt.fbc14aa1 ...
	I1123 09:24:03.300340  332015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt.fbc14aa1: {Name:mk96366c0e17998ceef956dc2b188d7321ecf01f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:03.300600  332015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.fbc14aa1 ...
	I1123 09:24:03.300633  332015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.fbc14aa1: {Name:mk3d8a4e6dd8546bed5a8d4ed49833bd7f302bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:03.300779  332015 certs.go:382] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt.fbc14aa1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt
	I1123 09:24:03.300944  332015 certs.go:386] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.fbc14aa1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key
	I1123 09:24:03.301074  332015 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key
	I1123 09:24:03.301086  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1123 09:24:03.301100  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1123 09:24:03.301112  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1123 09:24:03.301123  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1123 09:24:03.301134  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1123 09:24:03.301149  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1123 09:24:03.301161  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1123 09:24:03.301173  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1123 09:24:03.301228  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 09:24:03.301260  332015 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 09:24:03.301268  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:24:03.301296  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:24:03.301321  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:24:03.301343  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 09:24:03.301386  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:24:03.301443  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /usr/share/ca-certificates/2849042.pem
	I1123 09:24:03.301458  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:24:03.301469  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem -> /usr/share/ca-certificates/284904.pem
	I1123 09:24:03.302078  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:24:03.321449  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:24:03.344480  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:24:03.366943  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:24:03.388807  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 09:24:03.417016  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:24:03.441422  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:24:03.466546  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:24:03.486297  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 09:24:03.505713  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:24:03.523380  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 09:24:03.541787  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:24:03.553943  332015 ssh_runner.go:195] Run: openssl version
	I1123 09:24:03.560161  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 09:24:03.569902  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 09:24:03.574290  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 09:24:03.574428  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 09:24:03.615270  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:24:03.622991  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:24:03.631168  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:24:03.634814  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:24:03.634879  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:24:03.675522  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:24:03.683531  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 09:24:03.692221  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 09:24:03.695819  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 09:24:03.695881  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 09:24:03.736556  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
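The three blocks above follow the standard OpenSSL CA-directory layout: each certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs as <subject-hash>.0, where the hash is what openssl x509 -hash -noout prints (b5213941 for minikubeCA here, 3ec20f2e and 51391683 for the two test certs). To check one link by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"    # should resolve to the minikubeCA.pem symlink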
	I1123 09:24:03.744109  332015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:24:03.747786  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:24:03.788604  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:24:03.830353  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:24:03.884091  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:24:03.938208  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:24:03.984397  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
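Each -checkend 86400 run above asks whether the certificate is still valid 24 hours from now: openssl exits 0 if the cert does not expire within the given number of seconds and 1 if it does. For a single cert, with a human-readable expiry:

    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
        && echo "valid for at least 24h" || echo "expires within 24h"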
	I1123 09:24:04.045856  332015 kubeadm.go:401] StartCluster: {Name:ha-857095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:24:04.046023  332015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:24:04.046122  332015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:24:04.075084  332015 cri.go:89] found id: "3f803f0d2708c2458335864b38cbe1261399f59c726a34053cba0f4d0c4267e2"
	I1123 09:24:04.075156  332015 cri.go:89] found id: ""
	I1123 09:24:04.075240  332015 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:24:04.094693  332015 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:24:04Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:24:04.094840  332015 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:24:04.120167  332015 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:24:04.120234  332015 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:24:04.120315  332015 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:24:04.133113  332015 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:24:04.133681  332015 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-857095" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:24:04.133848  332015 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-282998/kubeconfig needs updating (will repair): [kubeconfig missing "ha-857095" cluster setting kubeconfig missing "ha-857095" context setting]
	I1123 09:24:04.134501  332015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:04.135077  332015 kapi.go:59] client config for ha-857095: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:24:04.135715  332015 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1123 09:24:04.135788  332015 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1123 09:24:04.135810  332015 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1123 09:24:04.135854  332015 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1123 09:24:04.135880  332015 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1123 09:24:04.135767  332015 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1123 09:24:04.137370  332015 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:24:04.163446  332015 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1123 09:24:04.163519  332015 kubeadm.go:602] duration metric: took 43.265409ms to restartPrimaryControlPlane
	I1123 09:24:04.163544  332015 kubeadm.go:403] duration metric: took 117.700121ms to StartCluster
	I1123 09:24:04.163589  332015 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:04.163671  332015 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:24:04.164252  332015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:04.164494  332015 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:24:04.164539  332015 start.go:242] waiting for startup goroutines ...
	I1123 09:24:04.164560  332015 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:24:04.165096  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:24:04.170570  332015 out.go:179] * Enabled addons: 
	I1123 09:24:04.174575  332015 addons.go:530] duration metric: took 10.006073ms for enable addons: enabled=[]
	I1123 09:24:04.174659  332015 start.go:247] waiting for cluster config update ...
	I1123 09:24:04.174681  332015 start.go:256] writing updated cluster config ...
	I1123 09:24:04.178275  332015 out.go:203] 
	I1123 09:24:04.181916  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:24:04.182093  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:24:04.188964  332015 out.go:179] * Starting "ha-857095-m02" control-plane node in "ha-857095" cluster
	I1123 09:24:04.192293  332015 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:24:04.195633  332015 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:24:04.198557  332015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:24:04.198653  332015 cache.go:65] Caching tarball of preloaded images
	I1123 09:24:04.198625  332015 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:24:04.198998  332015 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 09:24:04.199035  332015 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:24:04.199196  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:24:04.236750  332015 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:24:04.236768  332015 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:24:04.236781  332015 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:24:04.236803  332015 start.go:360] acquireMachinesLock for ha-857095-m02: {Name:mk302f2371cf69337e911dfb76261e6364d80001 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:24:04.236853  332015 start.go:364] duration metric: took 36.242µs to acquireMachinesLock for "ha-857095-m02"
	I1123 09:24:04.236872  332015 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:24:04.236877  332015 fix.go:54] fixHost starting: m02
	I1123 09:24:04.237131  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m02 --format={{.State.Status}}
	I1123 09:24:04.264568  332015 fix.go:112] recreateIfNeeded on ha-857095-m02: state=Stopped err=<nil>
	W1123 09:24:04.264592  332015 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:24:04.268071  332015 out.go:252] * Restarting existing docker container for "ha-857095-m02" ...
	I1123 09:24:04.268150  332015 cli_runner.go:164] Run: docker start ha-857095-m02
	I1123 09:24:04.652204  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m02 --format={{.State.Status}}
	I1123 09:24:04.680714  332015 kic.go:430] container "ha-857095-m02" state is running.
	I1123 09:24:04.681090  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m02
	I1123 09:24:04.707062  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:24:04.707317  332015 machine.go:94] provisionDockerMachine start ...
	I1123 09:24:04.707387  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:04.741254  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:04.741586  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33187 <nil> <nil>}
	I1123 09:24:04.741597  332015 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:24:04.742229  332015 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34184->127.0.0.1:33187: read: connection reset by peer
	I1123 09:24:08.002494  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m02
	
	I1123 09:24:08.002568  332015 ubuntu.go:182] provisioning hostname "ha-857095-m02"
	I1123 09:24:08.002678  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:08.029049  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:08.029348  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33187 <nil> <nil>}
	I1123 09:24:08.029358  332015 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-857095-m02 && echo "ha-857095-m02" | sudo tee /etc/hostname
	I1123 09:24:08.253783  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m02
	
	I1123 09:24:08.253924  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:08.291114  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:08.291434  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33187 <nil> <nil>}
	I1123 09:24:08.291450  332015 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857095-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857095-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857095-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:24:08.491050  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:24:08.491119  332015 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 09:24:08.491152  332015 ubuntu.go:190] setting up certificates
	I1123 09:24:08.491194  332015 provision.go:84] configureAuth start
	I1123 09:24:08.491321  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m02
	I1123 09:24:08.526937  332015 provision.go:143] copyHostCerts
	I1123 09:24:08.526984  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:24:08.527020  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 09:24:08.527027  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:24:08.527102  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 09:24:08.527176  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:24:08.527192  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 09:24:08.527197  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:24:08.527222  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 09:24:08.527259  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:24:08.527274  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 09:24:08.527278  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:24:08.527300  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 09:24:08.527343  332015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.ha-857095-m02 san=[127.0.0.1 192.168.49.3 ha-857095-m02 localhost minikube]
	I1123 09:24:09.262765  332015 provision.go:177] copyRemoteCerts
	I1123 09:24:09.262880  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:24:09.262954  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:09.280151  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m02/id_rsa Username:docker}
	I1123 09:24:09.397744  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1123 09:24:09.397799  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 09:24:09.444274  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1123 09:24:09.444335  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 09:24:09.474176  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1123 09:24:09.474229  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:24:09.500995  332015 provision.go:87] duration metric: took 1.009770735s to configureAuth
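The machine server certificate generated above (SANs 127.0.0.1, 192.168.49.3, ha-857095-m02, localhost, minikube) has just been copied to /etc/docker/server.pem on m02; the SAN list can be confirmed with:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'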
	I1123 09:24:09.501071  332015 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:24:09.501370  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:24:09.501570  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:09.541669  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:09.541983  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33187 <nil> <nil>}
	I1123 09:24:09.541997  332015 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:24:10.717183  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:24:10.717209  332015 machine.go:97] duration metric: took 6.009881771s to provisionDockerMachine
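The last provisioning step drops CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' into /etc/sysconfig/crio.minikube and restarts cri-o, so registries reachable on in-cluster service IPs can be used without TLS verification. How that file is wired into the service can be checked on the node (a sketch; the unit wiring depends on the kicbase image):

    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -iE 'EnvironmentFile|MINIKUBE'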
	I1123 09:24:10.717221  332015 start.go:293] postStartSetup for "ha-857095-m02" (driver="docker")
	I1123 09:24:10.717231  332015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:24:10.717289  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:24:10.717340  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:10.743261  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m02/id_rsa Username:docker}
	I1123 09:24:10.873831  332015 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:24:10.882112  332015 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:24:10.882138  332015 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:24:10.882150  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 09:24:10.882203  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 09:24:10.882279  332015 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 09:24:10.882286  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /etc/ssl/certs/2849042.pem
	I1123 09:24:10.882384  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:24:10.897705  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:24:10.928947  332015 start.go:296] duration metric: took 211.710763ms for postStartSetup
	I1123 09:24:10.929078  332015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:24:10.929161  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:10.965095  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m02/id_rsa Username:docker}
	I1123 09:24:11.077996  332015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:24:11.083158  332015 fix.go:56] duration metric: took 6.846271288s for fixHost
	I1123 09:24:11.083241  332015 start.go:83] releasing machines lock for "ha-857095-m02", held for 6.846378251s
	I1123 09:24:11.083359  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m02
	I1123 09:24:11.142549  332015 out.go:179] * Found network options:
	I1123 09:24:11.145622  332015 out.go:179]   - NO_PROXY=192.168.49.2
	W1123 09:24:11.148481  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:24:11.148526  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	I1123 09:24:11.148594  332015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:24:11.148633  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:11.148887  332015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:24:11.148950  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:11.179109  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m02/id_rsa Username:docker}
	I1123 09:24:11.188793  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m02/id_rsa Username:docker}
	I1123 09:24:11.691645  332015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:24:11.715317  332015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:24:11.715396  332015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:24:11.749537  332015 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:24:11.749567  332015 start.go:496] detecting cgroup driver to use...
	I1123 09:24:11.749599  332015 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:24:11.749652  332015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:24:11.790996  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:24:11.823649  332015 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:24:11.823714  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:24:11.850236  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:24:11.868366  332015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:24:12.144454  332015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:24:12.480952  332015 docker.go:234] disabling docker service ...
	I1123 09:24:12.481086  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:24:12.566871  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:24:12.598895  332015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:24:12.943816  332015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:24:13.198846  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:24:13.220755  332015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:24:13.238071  332015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:24:13.238185  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.246445  332015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:24:13.246513  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.254941  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.263305  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.271300  332015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:24:13.278821  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.288129  332015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.296195  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.304236  332015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:24:13.311253  332015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:24:13.318479  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:24:13.535394  332015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:25:43.810612  332015 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.275182672s)
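Unlike the roughly 170 ms restart on the primary node earlier (09:24:02.60 to 09:24:02.77), the same restart on m02 blocked for 1m30s. When a cri-o restart hangs like this, the service journal on the node is the first thing worth pulling, for example:

    sudo journalctl -u crio -b --no-pager | tail -n 50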
	I1123 09:25:43.810639  332015 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:25:43.810701  332015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:25:43.814933  332015 start.go:564] Will wait 60s for crictl version
	I1123 09:25:43.814992  332015 ssh_runner.go:195] Run: which crictl
	I1123 09:25:43.818922  332015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:25:43.846107  332015 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:25:43.846200  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:25:43.877706  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:25:43.909681  332015 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:25:43.912736  332015 out.go:179]   - env NO_PROXY=192.168.49.2
	I1123 09:25:43.915738  332015 cli_runner.go:164] Run: docker network inspect ha-857095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:25:43.931587  332015 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 09:25:43.935281  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
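Note: the { grep -v ...; echo ...; } pattern above keeps /etc/hosts idempotent: any stale host.minikube.internal line is removed before the fresh entry is appended, so repeated runs never duplicate the record. A sketch with the IP and hostname from this run:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$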
	I1123 09:25:43.944694  332015 mustload.go:66] Loading cluster: ha-857095
	I1123 09:25:43.944941  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:43.945204  332015 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:25:43.962501  332015 host.go:66] Checking if "ha-857095" exists ...
	I1123 09:25:43.962775  332015 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095 for IP: 192.168.49.3
	I1123 09:25:43.962789  332015 certs.go:195] generating shared ca certs ...
	I1123 09:25:43.962805  332015 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:25:43.962924  332015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 09:25:43.962987  332015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 09:25:43.962999  332015 certs.go:257] generating profile certs ...
	I1123 09:25:43.963077  332015 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key
	I1123 09:25:43.963146  332015 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.66daad91
	I1123 09:25:43.963186  332015 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key
	I1123 09:25:43.963194  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1123 09:25:43.963206  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1123 09:25:43.963217  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1123 09:25:43.963237  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1123 09:25:43.963248  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1123 09:25:43.963258  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1123 09:25:43.963270  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1123 09:25:43.963281  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1123 09:25:43.963328  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 09:25:43.963357  332015 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 09:25:43.963369  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:25:43.963395  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:25:43.963419  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:25:43.963442  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 09:25:43.963488  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:25:43.963520  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem -> /usr/share/ca-certificates/284904.pem
	I1123 09:25:43.963531  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /usr/share/ca-certificates/2849042.pem
	I1123 09:25:43.963542  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:25:43.963592  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:25:43.980751  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
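Note: the new ssh client above connects to 127.0.0.1:33182, i.e. the host port Docker published for the container's port 22; the port comes from the container inspect call just before it. A sketch of that lookup, using the container name from this log:

	# read the published SSH port for the node container; 33182 in this run
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-857095

The returned port is then used together with the per-machine id_rsa key to open the SSH session as the docker user, as the sshutil line above shows.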
	I1123 09:25:44.081825  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1123 09:25:44.085802  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1123 09:25:44.094194  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1123 09:25:44.097956  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1123 09:25:44.106256  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1123 09:25:44.110273  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1123 09:25:44.118652  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1123 09:25:44.122439  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1123 09:25:44.130532  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1123 09:25:44.133997  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1123 09:25:44.142041  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1123 09:25:44.145750  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1123 09:25:44.154268  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:25:44.174536  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:25:44.191976  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:25:44.210168  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:25:44.228737  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 09:25:44.246711  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:25:44.264397  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:25:44.282548  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:25:44.301229  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 09:25:44.321400  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 09:25:44.340621  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:25:44.360219  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1123 09:25:44.374691  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1123 09:25:44.388106  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1123 09:25:44.402723  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1123 09:25:44.416062  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1123 09:25:44.429635  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1123 09:25:44.443050  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1123 09:25:44.456835  332015 ssh_runner.go:195] Run: openssl version
	I1123 09:25:44.463525  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 09:25:44.472737  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 09:25:44.476731  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 09:25:44.476844  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 09:25:44.517979  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 09:25:44.525850  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 09:25:44.536453  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 09:25:44.542534  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 09:25:44.542604  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 09:25:44.599744  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:25:44.613671  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:25:44.626248  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:25:44.630279  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:25:44.630347  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:25:44.717285  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
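Note: the openssl x509 -hash calls above compute the subject hash that names the /etc/ssl/certs/<hash>.0 symlink (b5213941 for minikubeCA.pem in this run); that hashed symlink is how OpenSSL's CA directory lookup finds the certificate. A sketch of the same two steps for one certificate, paths taken from this log:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$pem")        # b5213941 in this run
	sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"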
	I1123 09:25:44.727653  332015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:25:44.734478  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:25:44.781588  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:25:44.834781  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:25:44.900074  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:25:44.968766  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:25:45.046196  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
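Note: the openssl -checkend 86400 calls above verify that each control-plane certificate remains valid for at least the next 24 hours; a non-zero exit status would mark the certificate as due for regeneration. A one-certificate sketch using a path from this log:

	if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	    echo "certificate expires within 24h, regenerate it"
	fi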
	I1123 09:25:45.126791  332015 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1123 09:25:45.126936  332015 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-857095-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:25:45.126971  332015 kube-vip.go:115] generating kube-vip config ...
	I1123 09:25:45.127039  332015 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1123 09:25:45.160018  332015 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:25:45.160101  332015 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
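Note: because the lsmod check above found no ip_vs modules on this node, the generated kube-vip manifest omits IPVS control-plane load-balancing and relies on ARP announcement of the VIP 192.168.49.254. A sketch of the same capability check, with the command taken from this log:

	if sudo sh -c "lsmod | grep -q ip_vs"; then
	    echo "ip_vs modules present: IPVS load-balancing can be enabled"
	else
	    echo "ip_vs modules missing: ARP-only VIP"    # the branch taken in this run
	fi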
	I1123 09:25:45.160193  332015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:25:45.182092  332015 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:25:45.182333  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1123 09:25:45.194768  332015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 09:25:45.221310  332015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:25:45.267324  332015 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1123 09:25:45.295755  332015 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1123 09:25:45.299778  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:25:45.311639  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:25:45.546785  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:25:45.562952  332015 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:25:45.563297  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:45.567237  332015 out.go:179] * Verifying Kubernetes components...
	I1123 09:25:45.570200  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:25:45.791204  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:25:45.805221  332015 kapi.go:59] client config for ha-857095: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1123 09:25:45.805300  332015 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1123 09:25:45.805568  332015 node_ready.go:35] waiting up to 6m0s for node "ha-857095-m02" to be "Ready" ...
	I1123 09:25:46.975594  332015 node_ready.go:49] node "ha-857095-m02" is "Ready"
	I1123 09:25:46.975708  332015 node_ready.go:38] duration metric: took 1.170108444s for node "ha-857095-m02" to be "Ready" ...
	I1123 09:25:46.975722  332015 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:25:46.979095  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:25:47.015790  332015 api_server.go:72] duration metric: took 1.452452994s to wait for apiserver process to appear ...
	I1123 09:25:47.015827  332015 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:25:47.015848  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:47.055731  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 09:25:47.055771  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 09:25:47.516044  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:47.524553  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:25:47.524596  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:25:48.015961  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:48.027139  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:25:48.027189  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:25:48.516751  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:48.530354  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:25:48.530386  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:25:49.015933  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:49.026181  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:25:49.026224  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:25:49.516868  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:49.544816  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:25:49.544849  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:25:50.015977  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:50.031576  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 09:25:50.034154  332015 api_server.go:141] control plane version: v1.34.1
	I1123 09:25:50.034191  332015 api_server.go:131] duration metric: took 3.018357536s to wait for apiserver health ...
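Note: the loop above polls https://192.168.49.2:8443/healthz roughly twice a second, tolerating the early 403 (anonymous request before RBAC bootstrap) and 500 (rbac/bootstrap-roles post-start hook still running) responses until the endpoint returns a plain "ok". A minimal curl equivalent, assuming the health endpoint is reachable without a client certificate once bootstrap completes (otherwise pass the client cert and key):

	# poll until the apiserver health endpoint reports "ok"; endpoint taken from this log
	until [ "$(curl -ksS https://192.168.49.2:8443/healthz)" = "ok" ]; do
	    sleep 0.5
	done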
	I1123 09:25:50.034201  332015 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:25:50.133527  332015 system_pods.go:59] 26 kube-system pods found
	I1123 09:25:50.133574  332015 system_pods.go:61] "coredns-66bc5c9577-gqskt" [9ec3e73a-4033-41ae-927a-50584a3e9653] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:25:50.133583  332015 system_pods.go:61] "coredns-66bc5c9577-kqvhl" [bcbbf58b-9d2d-4a51-b4c1-bfec16447df5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:25:50.133590  332015 system_pods.go:61] "etcd-ha-857095" [3eaffe71-9ce6-4a9b-8530-1de6a4ec8773] Running
	I1123 09:25:50.133596  332015 system_pods.go:61] "etcd-ha-857095-m02" [5f8628c9-5725-4ca9-9622-b42a9b63c833] Running
	I1123 09:25:50.133600  332015 system_pods.go:61] "etcd-ha-857095-m03" [2ec71863-ebd8-45ca-9f19-707503671154] Running
	I1123 09:25:50.133603  332015 system_pods.go:61] "kindnet-8bs9t" [d9dee210-2075-4095-8540-c13c401e5a68] Running
	I1123 09:25:50.133607  332015 system_pods.go:61] "kindnet-ls8hm" [b7c7ef9d-ebdd-4bd4-97e6-595b84787117] Running
	I1123 09:25:50.133622  332015 system_pods.go:61] "kindnet-r7p2c" [a4f419f5-ecbc-48e6-8f98-732c4ac5a977] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:25:50.133636  332015 system_pods.go:61] "kindnet-v5cch" [4bfed9c2-b321-43a0-a18b-c867696cf4cb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:25:50.133642  332015 system_pods.go:61] "kube-apiserver-ha-857095" [697606bd-c111-4922-adda-6902a7f40915] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:25:50.133647  332015 system_pods.go:61] "kube-apiserver-ha-857095-m02" [8516bae2-f830-4a82-aa30-dbd7bf657b52] Running
	I1123 09:25:50.133659  332015 system_pods.go:61] "kube-apiserver-ha-857095-m03" [9f6f5d7d-9bba-4b26-b928-05119bbc98af] Running
	I1123 09:25:50.133666  332015 system_pods.go:61] "kube-controller-manager-ha-857095" [026d1873-0078-4c87-a9c1-b5a615844bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:25:50.133671  332015 system_pods.go:61] "kube-controller-manager-ha-857095-m02" [51f4d1ee-3b47-49f2-907e-68598e7d88e1] Running
	I1123 09:25:50.133694  332015 system_pods.go:61] "kube-controller-manager-ha-857095-m03" [234e7d83-1430-4ee4-91e4-73bf5e7221dc] Running
	I1123 09:25:50.133700  332015 system_pods.go:61] "kube-proxy-275zc" [b46e4648-46c6-4f04-85bc-bbfd4aedc821] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:25:50.133704  332015 system_pods.go:61] "kube-proxy-6k46z" [f2387038-f806-4417-961a-cf4390f4b4a5] Running
	I1123 09:25:50.133712  332015 system_pods.go:61] "kube-proxy-9qgbr" [a03beba1-4074-45e0-a3a0-a4cf0917b9a8] Running
	I1123 09:25:50.133715  332015 system_pods.go:61] "kube-proxy-lqqmc" [81a61d2b-bb1b-46d7-9acc-035150e8061b] Running
	I1123 09:25:50.133721  332015 system_pods.go:61] "kube-scheduler-ha-857095" [0598722f-31ac-4529-8b00-94c9bccf8255] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:25:50.133732  332015 system_pods.go:61] "kube-scheduler-ha-857095-m02" [0d16a804-69c1-47f1-b32c-3b35f950765f] Running
	I1123 09:25:50.133737  332015 system_pods.go:61] "kube-scheduler-ha-857095-m03" [aaf4d61f-0ec3-4e06-912a-a87fc3ab3cdb] Running
	I1123 09:25:50.133741  332015 system_pods.go:61] "kube-vip-ha-857095" [41b5690c-90a6-4557-9e9c-fcb76fe0c548] Running
	I1123 09:25:50.133753  332015 system_pods.go:61] "kube-vip-ha-857095-m02" [9c7a58ce-d823-401a-9695-36a0b87ab3ca] Running
	I1123 09:25:50.133757  332015 system_pods.go:61] "kube-vip-ha-857095-m03" [3830c657-5386-4214-a319-d42e19a40c12] Running
	I1123 09:25:50.133761  332015 system_pods.go:61] "storage-provisioner" [fd6347d8-5602-4a34-875b-811bc8ea2bc2] Running
	I1123 09:25:50.133772  332015 system_pods.go:74] duration metric: took 99.565974ms to wait for pod list to return data ...
	I1123 09:25:50.133785  332015 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:25:50.235662  332015 default_sa.go:45] found service account: "default"
	I1123 09:25:50.235698  332015 default_sa.go:55] duration metric: took 101.906307ms for default service account to be created ...
	I1123 09:25:50.235710  332015 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:25:50.276224  332015 system_pods.go:86] 26 kube-system pods found
	I1123 09:25:50.276258  332015 system_pods.go:89] "coredns-66bc5c9577-gqskt" [9ec3e73a-4033-41ae-927a-50584a3e9653] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:25:50.276269  332015 system_pods.go:89] "coredns-66bc5c9577-kqvhl" [bcbbf58b-9d2d-4a51-b4c1-bfec16447df5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:25:50.276284  332015 system_pods.go:89] "etcd-ha-857095" [3eaffe71-9ce6-4a9b-8530-1de6a4ec8773] Running
	I1123 09:25:50.276290  332015 system_pods.go:89] "etcd-ha-857095-m02" [5f8628c9-5725-4ca9-9622-b42a9b63c833] Running
	I1123 09:25:50.276295  332015 system_pods.go:89] "etcd-ha-857095-m03" [2ec71863-ebd8-45ca-9f19-707503671154] Running
	I1123 09:25:50.276300  332015 system_pods.go:89] "kindnet-8bs9t" [d9dee210-2075-4095-8540-c13c401e5a68] Running
	I1123 09:25:50.276308  332015 system_pods.go:89] "kindnet-ls8hm" [b7c7ef9d-ebdd-4bd4-97e6-595b84787117] Running
	I1123 09:25:50.276314  332015 system_pods.go:89] "kindnet-r7p2c" [a4f419f5-ecbc-48e6-8f98-732c4ac5a977] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:25:50.276328  332015 system_pods.go:89] "kindnet-v5cch" [4bfed9c2-b321-43a0-a18b-c867696cf4cb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:25:50.276336  332015 system_pods.go:89] "kube-apiserver-ha-857095" [697606bd-c111-4922-adda-6902a7f40915] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:25:50.276345  332015 system_pods.go:89] "kube-apiserver-ha-857095-m02" [8516bae2-f830-4a82-aa30-dbd7bf657b52] Running
	I1123 09:25:50.276356  332015 system_pods.go:89] "kube-apiserver-ha-857095-m03" [9f6f5d7d-9bba-4b26-b928-05119bbc98af] Running
	I1123 09:25:50.276368  332015 system_pods.go:89] "kube-controller-manager-ha-857095" [026d1873-0078-4c87-a9c1-b5a615844bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:25:50.276374  332015 system_pods.go:89] "kube-controller-manager-ha-857095-m02" [51f4d1ee-3b47-49f2-907e-68598e7d88e1] Running
	I1123 09:25:50.276389  332015 system_pods.go:89] "kube-controller-manager-ha-857095-m03" [234e7d83-1430-4ee4-91e4-73bf5e7221dc] Running
	I1123 09:25:50.276395  332015 system_pods.go:89] "kube-proxy-275zc" [b46e4648-46c6-4f04-85bc-bbfd4aedc821] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:25:50.276399  332015 system_pods.go:89] "kube-proxy-6k46z" [f2387038-f806-4417-961a-cf4390f4b4a5] Running
	I1123 09:25:50.276405  332015 system_pods.go:89] "kube-proxy-9qgbr" [a03beba1-4074-45e0-a3a0-a4cf0917b9a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:25:50.276409  332015 system_pods.go:89] "kube-proxy-lqqmc" [81a61d2b-bb1b-46d7-9acc-035150e8061b] Running
	I1123 09:25:50.276418  332015 system_pods.go:89] "kube-scheduler-ha-857095" [0598722f-31ac-4529-8b00-94c9bccf8255] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:25:50.276439  332015 system_pods.go:89] "kube-scheduler-ha-857095-m02" [0d16a804-69c1-47f1-b32c-3b35f950765f] Running
	I1123 09:25:50.276443  332015 system_pods.go:89] "kube-scheduler-ha-857095-m03" [aaf4d61f-0ec3-4e06-912a-a87fc3ab3cdb] Running
	I1123 09:25:50.276448  332015 system_pods.go:89] "kube-vip-ha-857095" [41b5690c-90a6-4557-9e9c-fcb76fe0c548] Running
	I1123 09:25:50.276452  332015 system_pods.go:89] "kube-vip-ha-857095-m02" [9c7a58ce-d823-401a-9695-36a0b87ab3ca] Running
	I1123 09:25:50.276459  332015 system_pods.go:89] "kube-vip-ha-857095-m03" [3830c657-5386-4214-a319-d42e19a40c12] Running
	I1123 09:25:50.276463  332015 system_pods.go:89] "storage-provisioner" [fd6347d8-5602-4a34-875b-811bc8ea2bc2] Running
	I1123 09:25:50.276469  332015 system_pods.go:126] duration metric: took 40.753939ms to wait for k8s-apps to be running ...
	I1123 09:25:50.276477  332015 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:25:50.276538  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:25:50.296938  332015 system_svc.go:56] duration metric: took 20.452092ms WaitForService to wait for kubelet
	I1123 09:25:50.296975  332015 kubeadm.go:587] duration metric: took 4.73364502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:25:50.296993  332015 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:25:50.317399  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:25:50.317469  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:25:50.317482  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:25:50.317487  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:25:50.317491  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:25:50.317495  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:25:50.317499  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:25:50.317511  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:25:50.317521  332015 node_conditions.go:105] duration metric: took 20.520835ms to run NodePressure ...
	I1123 09:25:50.317534  332015 start.go:242] waiting for startup goroutines ...
	I1123 09:25:50.317564  332015 start.go:256] writing updated cluster config ...
	I1123 09:25:50.323143  332015 out.go:203] 
	I1123 09:25:50.326401  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:50.326524  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:25:50.330156  332015 out.go:179] * Starting "ha-857095-m03" control-plane node in "ha-857095" cluster
	I1123 09:25:50.334438  332015 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:25:50.338097  332015 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:25:50.340607  332015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:25:50.340654  332015 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:25:50.340838  332015 cache.go:65] Caching tarball of preloaded images
	I1123 09:25:50.340926  332015 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 09:25:50.340940  332015 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:25:50.341072  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:25:50.370726  332015 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:25:50.370752  332015 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:25:50.370766  332015 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:25:50.370789  332015 start.go:360] acquireMachinesLock for ha-857095-m03: {Name:mk6acf38570d035eb912e1d2f030641425a2af59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:25:50.370845  332015 start.go:364] duration metric: took 36.226µs to acquireMachinesLock for "ha-857095-m03"
	I1123 09:25:50.370869  332015 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:25:50.370875  332015 fix.go:54] fixHost starting: m03
	I1123 09:25:50.371144  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m03 --format={{.State.Status}}
	I1123 09:25:50.400510  332015 fix.go:112] recreateIfNeeded on ha-857095-m03: state=Stopped err=<nil>
	W1123 09:25:50.400540  332015 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:25:50.404410  332015 out.go:252] * Restarting existing docker container for "ha-857095-m03" ...
	I1123 09:25:50.404500  332015 cli_runner.go:164] Run: docker start ha-857095-m03
	I1123 09:25:50.796227  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m03 --format={{.State.Status}}
	I1123 09:25:50.840417  332015 kic.go:430] container "ha-857095-m03" state is running.
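Note: bringing the stopped m03 node back is just a docker start of the existing container, followed by an inspect to confirm it is running before provisioning resumes over SSH. A sketch with the container name from this run:

	docker start ha-857095-m03
	docker container inspect ha-857095-m03 --format '{{.State.Status}}'   # expect "running"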
	I1123 09:25:50.840758  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m03
	I1123 09:25:50.894166  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:25:50.894416  332015 machine.go:94] provisionDockerMachine start ...
	I1123 09:25:50.894479  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:50.924984  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:25:50.925293  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1123 09:25:50.925301  332015 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:25:50.926098  332015 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 09:25:54.161789  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m03
	
	I1123 09:25:54.161877  332015 ubuntu.go:182] provisioning hostname "ha-857095-m03"
	I1123 09:25:54.161974  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:54.189870  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:25:54.190176  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1123 09:25:54.190186  332015 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-857095-m03 && echo "ha-857095-m03" | sudo tee /etc/hostname
	I1123 09:25:54.416867  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m03
	
	I1123 09:25:54.416961  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:54.451607  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:25:54.451922  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1123 09:25:54.451938  332015 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857095-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857095-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857095-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:25:54.684288  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:25:54.684344  332015 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 09:25:54.684362  332015 ubuntu.go:190] setting up certificates
	I1123 09:25:54.684372  332015 provision.go:84] configureAuth start
	I1123 09:25:54.684450  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m03
	I1123 09:25:54.708108  332015 provision.go:143] copyHostCerts
	I1123 09:25:54.708151  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:25:54.708186  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 09:25:54.708192  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:25:54.708273  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 09:25:54.708351  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:25:54.708368  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 09:25:54.708373  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:25:54.708399  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 09:25:54.708439  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:25:54.708455  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 09:25:54.708459  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:25:54.708484  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 09:25:54.708532  332015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.ha-857095-m03 san=[127.0.0.1 192.168.49.4 ha-857095-m03 localhost minikube]
	I1123 09:25:54.877285  332015 provision.go:177] copyRemoteCerts
	I1123 09:25:54.877362  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:25:54.877428  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:54.897354  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m03/id_rsa Username:docker}
	I1123 09:25:55.052011  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1123 09:25:55.052077  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 09:25:55.110347  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1123 09:25:55.110418  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:25:55.160630  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1123 09:25:55.160706  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 09:25:55.206791  332015 provision.go:87] duration metric: took 522.405111ms to configureAuth
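For reference, a minimal openssl sketch of the server-cert generation reported above -- hypothetical commands and file names only, not minikube's own code path (minikube generates these certificates in-process); the SAN list mirrors the one in the log:

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.ha-857095-m03" -out server.csr
    # sign with the cluster CA; SANs match [127.0.0.1 192.168.49.4 ha-857095-m03 localhost minikube]
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.49.4,DNS:ha-857095-m03,DNS:localhost,DNS:minikube") \
      -out server.pem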
	I1123 09:25:55.206859  332015 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:25:55.207143  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:55.207288  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:55.231475  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:25:55.231787  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1123 09:25:55.231807  332015 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:25:55.818269  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:25:55.818294  332015 machine.go:97] duration metric: took 4.923860996s to provisionDockerMachine
	I1123 09:25:55.818307  332015 start.go:293] postStartSetup for "ha-857095-m03" (driver="docker")
	I1123 09:25:55.818318  332015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:25:55.818419  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:25:55.818465  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:55.838899  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m03/id_rsa Username:docker}
	I1123 09:25:55.945315  332015 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:25:55.948680  332015 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:25:55.948711  332015 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:25:55.948723  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 09:25:55.948779  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 09:25:55.948855  332015 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 09:25:55.948865  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /etc/ssl/certs/2849042.pem
	I1123 09:25:55.948961  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:25:55.956253  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:25:55.975283  332015 start.go:296] duration metric: took 156.955332ms for postStartSetup
	I1123 09:25:55.975413  332015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:25:55.975489  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:55.995364  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m03/id_rsa Username:docker}
	I1123 09:25:56.102831  332015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:25:56.108114  332015 fix.go:56] duration metric: took 5.737232288s for fixHost
	I1123 09:25:56.108138  332015 start.go:83] releasing machines lock for "ha-857095-m03", held for 5.737279936s
	I1123 09:25:56.108206  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m03
	I1123 09:25:56.129684  332015 out.go:179] * Found network options:
	I1123 09:25:56.132653  332015 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1123 09:25:56.138460  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:25:56.138495  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:25:56.138520  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:25:56.138534  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	I1123 09:25:56.138602  332015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:25:56.138645  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:56.138894  332015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:25:56.138945  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:56.160178  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m03/id_rsa Username:docker}
	I1123 09:25:56.178028  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m03/id_rsa Username:docker}
	I1123 09:25:56.510498  332015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:25:56.519235  332015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:25:56.519358  332015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:25:56.532899  332015 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:25:56.532974  332015 start.go:496] detecting cgroup driver to use...
	I1123 09:25:56.533021  332015 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:25:56.533095  332015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:25:56.563353  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:25:56.582194  332015 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:25:56.582307  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:25:56.604304  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:25:56.624857  332015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:25:56.880123  332015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:25:57.130771  332015 docker.go:234] disabling docker service ...
	I1123 09:25:57.130892  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:25:57.155366  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:25:57.181953  332015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:25:57.470517  332015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:25:57.703602  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:25:57.722751  332015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:25:57.754960  332015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:25:57.755080  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.788981  332015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:25:57.789102  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.805042  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.815549  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.830253  332015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:25:57.840395  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.853329  332015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.867204  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.882910  332015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:25:57.895568  332015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:25:57.910955  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:25:58.202730  332015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:25:59.499296  332015 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.296482203s)
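A hand-run spot check of what the sed edits above changed, and that cri-o came back after the restart (hypothetical follow-up, not part of the test run):

    # keys rewritten above: pause_image, cgroup_manager, conmon_cgroup, ip_unprivileged_port_start
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl is-active crio
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version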
	I1123 09:25:59.499324  332015 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:25:59.499400  332015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:25:59.503386  332015 start.go:564] Will wait 60s for crictl version
	I1123 09:25:59.503504  332015 ssh_runner.go:195] Run: which crictl
	I1123 09:25:59.507281  332015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:25:59.537756  332015 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:25:59.537841  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:25:59.571202  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:25:59.604176  332015 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:25:59.607193  332015 out.go:179]   - env NO_PROXY=192.168.49.2
	I1123 09:25:59.610134  332015 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1123 09:25:59.613170  332015 cli_runner.go:164] Run: docker network inspect ha-857095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:25:59.630136  332015 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 09:25:59.634914  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:25:59.644730  332015 mustload.go:66] Loading cluster: ha-857095
	I1123 09:25:59.644972  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:59.645239  332015 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:25:59.662875  332015 host.go:66] Checking if "ha-857095" exists ...
	I1123 09:25:59.663179  332015 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095 for IP: 192.168.49.4
	I1123 09:25:59.663188  332015 certs.go:195] generating shared ca certs ...
	I1123 09:25:59.663201  332015 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:25:59.663327  332015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 09:25:59.663365  332015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 09:25:59.663372  332015 certs.go:257] generating profile certs ...
	I1123 09:25:59.663446  332015 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key
	I1123 09:25:59.663522  332015 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.283ff493
	I1123 09:25:59.663567  332015 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key
	I1123 09:25:59.663575  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1123 09:25:59.663589  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1123 09:25:59.663601  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1123 09:25:59.663612  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1123 09:25:59.663621  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1123 09:25:59.663633  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1123 09:25:59.663644  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1123 09:25:59.663654  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1123 09:25:59.663702  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 09:25:59.663734  332015 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 09:25:59.663742  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:25:59.663771  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:25:59.663797  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:25:59.663820  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 09:25:59.663870  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:25:59.663898  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem -> /usr/share/ca-certificates/284904.pem
	I1123 09:25:59.663912  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /usr/share/ca-certificates/2849042.pem
	I1123 09:25:59.663923  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:25:59.663978  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:25:59.689941  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:25:59.793738  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1123 09:25:59.797235  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1123 09:25:59.805196  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1123 09:25:59.808653  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1123 09:25:59.816623  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1123 09:25:59.819984  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1123 09:25:59.828037  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1123 09:25:59.831812  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1123 09:25:59.839915  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1123 09:25:59.843477  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1123 09:25:59.851542  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1123 09:25:59.855295  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1123 09:25:59.863949  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:25:59.885646  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:25:59.904286  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:25:59.924769  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:25:59.944702  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 09:25:59.963610  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:25:59.984488  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:26:00.117342  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:26:00.182322  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 09:26:00.220393  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 09:26:00.303614  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:26:00.335892  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1123 09:26:00.355160  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1123 09:26:00.374206  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1123 09:26:00.392709  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1123 09:26:00.409109  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1123 09:26:00.425117  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1123 09:26:00.439914  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1123 09:26:00.464465  332015 ssh_runner.go:195] Run: openssl version
	I1123 09:26:00.472524  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 09:26:00.483656  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 09:26:00.487711  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 09:26:00.487827  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 09:26:00.532783  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 09:26:00.543887  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 09:26:00.551979  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 09:26:00.555635  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 09:26:00.555720  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 09:26:00.597611  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:26:00.605512  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:26:00.613913  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:00.617669  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:00.617766  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:00.660921  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
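The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the respective certificates; an illustrative re-run, assuming the same files:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0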
	I1123 09:26:00.669960  332015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:26:00.674647  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:26:00.723335  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:26:00.764258  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:26:00.804912  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:26:00.845808  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:26:00.888833  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 09:26:00.931554  332015 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1123 09:26:00.931679  332015 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-857095-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:26:00.931715  332015 kube-vip.go:115] generating kube-vip config ...
	I1123 09:26:00.931766  332015 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1123 09:26:00.944231  332015 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:26:00.944300  332015 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
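Per the lsmod check a few lines earlier, IPVS-based control-plane load-balancing was skipped, so the manifest above only advertises the VIP 192.168.49.254 via ARP (vip_arp=true). A quick manual re-run of the same precondition check -- hypothetical commands only:

    lsmod | grep ip_vs || echo "ip_vs not loaded"
    sudo modprobe ip_vs 2>/dev/null || echo "ip_vs module unavailable (expected inside this kic container)"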
	I1123 09:26:00.944366  332015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:26:00.952127  332015 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:26:00.952218  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1123 09:26:00.959898  332015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 09:26:00.974683  332015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:26:00.988424  332015 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1123 09:26:01.007528  332015 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1123 09:26:01.011388  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:26:01.021832  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:26:01.167574  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:26:01.186465  332015 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:26:01.187024  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:01.191851  332015 out.go:179] * Verifying Kubernetes components...
	I1123 09:26:01.194848  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:26:01.336348  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:26:01.352032  332015 kapi.go:59] client config for ha-857095: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1123 09:26:01.352169  332015 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1123 09:26:01.352449  332015 node_ready.go:35] waiting up to 6m0s for node "ha-857095-m03" to be "Ready" ...
	I1123 09:26:01.355787  332015 node_ready.go:49] node "ha-857095-m03" is "Ready"
	I1123 09:26:01.355816  332015 node_ready.go:38] duration metric: took 3.32939ms for node "ha-857095-m03" to be "Ready" ...
	I1123 09:26:01.355830  332015 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:26:01.355885  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:01.856392  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:02.356689  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:02.856504  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:03.356575  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:03.856101  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:04.356803  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:04.856202  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:05.356951  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:05.856542  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:06.356037  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:06.856518  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:07.356012  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:07.856915  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:08.356635  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:08.856266  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:09.356016  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:09.375020  332015 api_server.go:72] duration metric: took 8.188500317s to wait for apiserver process to appear ...
	I1123 09:26:09.375044  332015 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:26:09.375064  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:26:09.384535  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 09:26:09.386418  332015 api_server.go:141] control plane version: v1.34.1
	I1123 09:26:09.386440  332015 api_server.go:131] duration metric: took 11.388651ms to wait for apiserver health ...
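The two readiness signals waited on above can be reproduced by hand on the node (hypothetical spot-check; /healthz is readable anonymously under default RBAC, otherwise pass client certificates):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    curl -sk https://192.168.49.2:8443/healthz   # expect: ok, matching the 200 response logged above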
	I1123 09:26:09.386448  332015 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:26:09.406325  332015 system_pods.go:59] 26 kube-system pods found
	I1123 09:26:09.407759  332015 system_pods.go:61] "coredns-66bc5c9577-gqskt" [9ec3e73a-4033-41ae-927a-50584a3e9653] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:26:09.407805  332015 system_pods.go:61] "coredns-66bc5c9577-kqvhl" [bcbbf58b-9d2d-4a51-b4c1-bfec16447df5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:26:09.407831  332015 system_pods.go:61] "etcd-ha-857095" [3eaffe71-9ce6-4a9b-8530-1de6a4ec8773] Running
	I1123 09:26:09.407852  332015 system_pods.go:61] "etcd-ha-857095-m02" [5f8628c9-5725-4ca9-9622-b42a9b63c833] Running
	I1123 09:26:09.407873  332015 system_pods.go:61] "etcd-ha-857095-m03" [2ec71863-ebd8-45ca-9f19-707503671154] Running
	I1123 09:26:09.407906  332015 system_pods.go:61] "kindnet-8bs9t" [d9dee210-2075-4095-8540-c13c401e5a68] Running
	I1123 09:26:09.407931  332015 system_pods.go:61] "kindnet-ls8hm" [b7c7ef9d-ebdd-4bd4-97e6-595b84787117] Running
	I1123 09:26:09.407950  332015 system_pods.go:61] "kindnet-r7p2c" [a4f419f5-ecbc-48e6-8f98-732c4ac5a977] Running
	I1123 09:26:09.407971  332015 system_pods.go:61] "kindnet-v5cch" [4bfed9c2-b321-43a0-a18b-c867696cf4cb] Running
	I1123 09:26:09.407992  332015 system_pods.go:61] "kube-apiserver-ha-857095" [697606bd-c111-4922-adda-6902a7f40915] Running
	I1123 09:26:09.408020  332015 system_pods.go:61] "kube-apiserver-ha-857095-m02" [8516bae2-f830-4a82-aa30-dbd7bf657b52] Running
	I1123 09:26:09.408046  332015 system_pods.go:61] "kube-apiserver-ha-857095-m03" [9f6f5d7d-9bba-4b26-b928-05119bbc98af] Running
	I1123 09:26:09.408073  332015 system_pods.go:61] "kube-controller-manager-ha-857095" [026d1873-0078-4c87-a9c1-b5a615844bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:26:09.408095  332015 system_pods.go:61] "kube-controller-manager-ha-857095-m02" [51f4d1ee-3b47-49f2-907e-68598e7d88e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:26:09.408128  332015 system_pods.go:61] "kube-controller-manager-ha-857095-m03" [234e7d83-1430-4ee4-91e4-73bf5e7221dc] Running
	I1123 09:26:09.408158  332015 system_pods.go:61] "kube-proxy-275zc" [b46e4648-46c6-4f04-85bc-bbfd4aedc821] Running
	I1123 09:26:09.408180  332015 system_pods.go:61] "kube-proxy-6k46z" [f2387038-f806-4417-961a-cf4390f4b4a5] Running
	I1123 09:26:09.408201  332015 system_pods.go:61] "kube-proxy-9qgbr" [a03beba1-4074-45e0-a3a0-a4cf0917b9a8] Running
	I1123 09:26:09.408237  332015 system_pods.go:61] "kube-proxy-lqqmc" [81a61d2b-bb1b-46d7-9acc-035150e8061b] Running
	I1123 09:26:09.408274  332015 system_pods.go:61] "kube-scheduler-ha-857095" [0598722f-31ac-4529-8b00-94c9bccf8255] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:26:09.408299  332015 system_pods.go:61] "kube-scheduler-ha-857095-m02" [0d16a804-69c1-47f1-b32c-3b35f950765f] Running
	I1123 09:26:09.408319  332015 system_pods.go:61] "kube-scheduler-ha-857095-m03" [aaf4d61f-0ec3-4e06-912a-a87fc3ab3cdb] Running
	I1123 09:26:09.408352  332015 system_pods.go:61] "kube-vip-ha-857095" [41b5690c-90a6-4557-9e9c-fcb76fe0c548] Running
	I1123 09:26:09.408380  332015 system_pods.go:61] "kube-vip-ha-857095-m02" [9c7a58ce-d823-401a-9695-36a0b87ab3ca] Running
	I1123 09:26:09.408432  332015 system_pods.go:61] "kube-vip-ha-857095-m03" [3830c657-5386-4214-a319-d42e19a40c12] Running
	I1123 09:26:09.408457  332015 system_pods.go:61] "storage-provisioner" [fd6347d8-5602-4a34-875b-811bc8ea2bc2] Running
	I1123 09:26:09.408480  332015 system_pods.go:74] duration metric: took 22.024671ms to wait for pod list to return data ...
	I1123 09:26:09.408503  332015 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:26:09.412561  332015 default_sa.go:45] found service account: "default"
	I1123 09:26:09.412632  332015 default_sa.go:55] duration metric: took 4.107335ms for default service account to be created ...
	I1123 09:26:09.412660  332015 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:26:09.420811  332015 system_pods.go:86] 26 kube-system pods found
	I1123 09:26:09.420908  332015 system_pods.go:89] "coredns-66bc5c9577-gqskt" [9ec3e73a-4033-41ae-927a-50584a3e9653] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:26:09.420935  332015 system_pods.go:89] "coredns-66bc5c9577-kqvhl" [bcbbf58b-9d2d-4a51-b4c1-bfec16447df5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:26:09.420976  332015 system_pods.go:89] "etcd-ha-857095" [3eaffe71-9ce6-4a9b-8530-1de6a4ec8773] Running
	I1123 09:26:09.421010  332015 system_pods.go:89] "etcd-ha-857095-m02" [5f8628c9-5725-4ca9-9622-b42a9b63c833] Running
	I1123 09:26:09.421033  332015 system_pods.go:89] "etcd-ha-857095-m03" [2ec71863-ebd8-45ca-9f19-707503671154] Running
	I1123 09:26:09.421055  332015 system_pods.go:89] "kindnet-8bs9t" [d9dee210-2075-4095-8540-c13c401e5a68] Running
	I1123 09:26:09.421089  332015 system_pods.go:89] "kindnet-ls8hm" [b7c7ef9d-ebdd-4bd4-97e6-595b84787117] Running
	I1123 09:26:09.421118  332015 system_pods.go:89] "kindnet-r7p2c" [a4f419f5-ecbc-48e6-8f98-732c4ac5a977] Running
	I1123 09:26:09.421158  332015 system_pods.go:89] "kindnet-v5cch" [4bfed9c2-b321-43a0-a18b-c867696cf4cb] Running
	I1123 09:26:09.421187  332015 system_pods.go:89] "kube-apiserver-ha-857095" [697606bd-c111-4922-adda-6902a7f40915] Running
	I1123 09:26:09.421211  332015 system_pods.go:89] "kube-apiserver-ha-857095-m02" [8516bae2-f830-4a82-aa30-dbd7bf657b52] Running
	I1123 09:26:09.421233  332015 system_pods.go:89] "kube-apiserver-ha-857095-m03" [9f6f5d7d-9bba-4b26-b928-05119bbc98af] Running
	I1123 09:26:09.421274  332015 system_pods.go:89] "kube-controller-manager-ha-857095" [026d1873-0078-4c87-a9c1-b5a615844bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:26:09.421303  332015 system_pods.go:89] "kube-controller-manager-ha-857095-m02" [51f4d1ee-3b47-49f2-907e-68598e7d88e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:26:09.421325  332015 system_pods.go:89] "kube-controller-manager-ha-857095-m03" [234e7d83-1430-4ee4-91e4-73bf5e7221dc] Running
	I1123 09:26:09.421348  332015 system_pods.go:89] "kube-proxy-275zc" [b46e4648-46c6-4f04-85bc-bbfd4aedc821] Running
	I1123 09:26:09.421385  332015 system_pods.go:89] "kube-proxy-6k46z" [f2387038-f806-4417-961a-cf4390f4b4a5] Running
	I1123 09:26:09.421421  332015 system_pods.go:89] "kube-proxy-9qgbr" [a03beba1-4074-45e0-a3a0-a4cf0917b9a8] Running
	I1123 09:26:09.421441  332015 system_pods.go:89] "kube-proxy-lqqmc" [81a61d2b-bb1b-46d7-9acc-035150e8061b] Running
	I1123 09:26:09.421463  332015 system_pods.go:89] "kube-scheduler-ha-857095" [0598722f-31ac-4529-8b00-94c9bccf8255] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:26:09.421494  332015 system_pods.go:89] "kube-scheduler-ha-857095-m02" [0d16a804-69c1-47f1-b32c-3b35f950765f] Running
	I1123 09:26:09.421521  332015 system_pods.go:89] "kube-scheduler-ha-857095-m03" [aaf4d61f-0ec3-4e06-912a-a87fc3ab3cdb] Running
	I1123 09:26:09.421541  332015 system_pods.go:89] "kube-vip-ha-857095" [41b5690c-90a6-4557-9e9c-fcb76fe0c548] Running
	I1123 09:26:09.421562  332015 system_pods.go:89] "kube-vip-ha-857095-m02" [9c7a58ce-d823-401a-9695-36a0b87ab3ca] Running
	I1123 09:26:09.421595  332015 system_pods.go:89] "kube-vip-ha-857095-m03" [3830c657-5386-4214-a319-d42e19a40c12] Running
	I1123 09:26:09.421621  332015 system_pods.go:89] "storage-provisioner" [fd6347d8-5602-4a34-875b-811bc8ea2bc2] Running
	I1123 09:26:09.421644  332015 system_pods.go:126] duration metric: took 8.958012ms to wait for k8s-apps to be running ...
	I1123 09:26:09.421666  332015 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:26:09.421753  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:26:09.436477  332015 system_svc.go:56] duration metric: took 14.802398ms WaitForService to wait for kubelet
	I1123 09:26:09.436515  332015 kubeadm.go:587] duration metric: took 8.250000324s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:26:09.436534  332015 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:26:09.440490  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:09.440519  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:09.440532  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:09.440537  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:09.440549  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:09.440555  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:09.440563  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:09.440568  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:09.440578  332015 node_conditions.go:105] duration metric: took 4.039042ms to run NodePressure ...
	I1123 09:26:09.440592  332015 start.go:242] waiting for startup goroutines ...
	I1123 09:26:09.440627  332015 start.go:256] writing updated cluster config ...
	I1123 09:26:09.444444  332015 out.go:203] 
	I1123 09:26:09.447845  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:09.447976  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:26:09.451331  332015 out.go:179] * Starting "ha-857095-m04" worker node in "ha-857095" cluster
	I1123 09:26:09.454181  332015 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:26:09.457128  332015 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:26:09.459981  332015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:26:09.460044  332015 cache.go:65] Caching tarball of preloaded images
	I1123 09:26:09.460053  332015 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:26:09.460162  332015 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 09:26:09.460183  332015 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:26:09.460319  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:26:09.487056  332015 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:26:09.487075  332015 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:26:09.487099  332015 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:26:09.487126  332015 start.go:360] acquireMachinesLock for ha-857095-m04: {Name:mkc778064e426bc743bab6e8fad34bbaae40e782 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:26:09.487176  332015 start.go:364] duration metric: took 35.471µs to acquireMachinesLock for "ha-857095-m04"
	I1123 09:26:09.487195  332015 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:26:09.487200  332015 fix.go:54] fixHost starting: m04
	I1123 09:26:09.487451  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m04 --format={{.State.Status}}
	I1123 09:26:09.507899  332015 fix.go:112] recreateIfNeeded on ha-857095-m04: state=Stopped err=<nil>
	W1123 09:26:09.507924  332015 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:26:09.511107  332015 out.go:252] * Restarting existing docker container for "ha-857095-m04" ...
	I1123 09:26:09.511253  332015 cli_runner.go:164] Run: docker start ha-857095-m04
	I1123 09:26:09.866032  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m04 --format={{.State.Status}}
	I1123 09:26:09.896315  332015 kic.go:430] container "ha-857095-m04" state is running.
	I1123 09:26:09.896669  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m04
	I1123 09:26:09.920856  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:26:09.921084  332015 machine.go:94] provisionDockerMachine start ...
	I1123 09:26:09.921148  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:09.953275  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:26:09.953746  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1123 09:26:09.953766  332015 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:26:09.954414  332015 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 09:26:13.177535  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m04
	
	I1123 09:26:13.177568  332015 ubuntu.go:182] provisioning hostname "ha-857095-m04"
	I1123 09:26:13.177640  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:13.208850  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:26:13.209159  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1123 09:26:13.209176  332015 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-857095-m04 && echo "ha-857095-m04" | sudo tee /etc/hostname
	I1123 09:26:13.425765  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m04
	
	I1123 09:26:13.425859  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:13.460720  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:26:13.461034  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1123 09:26:13.461061  332015 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857095-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857095-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857095-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:26:13.666205  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:26:13.666234  332015 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 09:26:13.666252  332015 ubuntu.go:190] setting up certificates
	I1123 09:26:13.666263  332015 provision.go:84] configureAuth start
	I1123 09:26:13.666323  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m04
	I1123 09:26:13.699046  332015 provision.go:143] copyHostCerts
	I1123 09:26:13.699100  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:26:13.699136  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 09:26:13.699149  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:26:13.699242  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 09:26:13.699332  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:26:13.699356  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 09:26:13.699365  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:26:13.699394  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 09:26:13.699443  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:26:13.699466  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 09:26:13.699475  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:26:13.699504  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 09:26:13.699558  332015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.ha-857095-m04 san=[127.0.0.1 192.168.49.5 ha-857095-m04 localhost minikube]
	I1123 09:26:13.947128  332015 provision.go:177] copyRemoteCerts
	I1123 09:26:13.947199  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:26:13.947245  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:13.964666  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m04/id_rsa Username:docker}
	I1123 09:26:14.108546  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1123 09:26:14.108614  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:26:14.147222  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1123 09:26:14.147298  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 09:26:14.174245  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1123 09:26:14.174323  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:26:14.202367  332015 provision.go:87] duration metric: took 536.081268ms to configureAuth
	I1123 09:26:14.202398  332015 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:26:14.202692  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:14.202823  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:14.228826  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:26:14.229151  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1123 09:26:14.229165  332015 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:26:14.698077  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:26:14.698147  332015 machine.go:97] duration metric: took 4.777046451s to provisionDockerMachine
	I1123 09:26:14.698176  332015 start.go:293] postStartSetup for "ha-857095-m04" (driver="docker")
	I1123 09:26:14.698221  332015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:26:14.698305  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:26:14.698371  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:14.723686  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m04/id_rsa Username:docker}
	I1123 09:26:14.851030  332015 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:26:14.858337  332015 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:26:14.858362  332015 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:26:14.858374  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 09:26:14.858433  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 09:26:14.858508  332015 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 09:26:14.858515  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /etc/ssl/certs/2849042.pem
	I1123 09:26:14.858611  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:26:14.870806  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:26:14.904225  332015 start.go:296] duration metric: took 206.013245ms for postStartSetup
	I1123 09:26:14.904312  332015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:26:14.904357  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:14.925549  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m04/id_rsa Username:docker}
	I1123 09:26:15.048457  332015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:26:15.064072  332015 fix.go:56] duration metric: took 5.57686319s for fixHost
	I1123 09:26:15.064101  332015 start.go:83] releasing machines lock for "ha-857095-m04", held for 5.576912749s
	I1123 09:26:15.064189  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m04
	I1123 09:26:15.099935  332015 out.go:179] * Found network options:
	I1123 09:26:15.102733  332015 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1123 09:26:15.105537  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:26:15.105581  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:26:15.105592  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:26:15.105615  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:26:15.105625  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:26:15.105635  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	I1123 09:26:15.105709  332015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:26:15.105751  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:15.106052  332015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:26:15.106106  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:15.139318  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m04/id_rsa Username:docker}
	I1123 09:26:15.143260  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m04/id_rsa Username:docker}
	I1123 09:26:15.438462  332015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:26:15.444861  332015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:26:15.444936  332015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:26:15.465823  332015 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:26:15.465847  332015 start.go:496] detecting cgroup driver to use...
	I1123 09:26:15.465876  332015 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:26:15.465925  332015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:26:15.496588  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:26:15.514577  332015 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:26:15.514673  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:26:15.534950  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:26:15.548709  332015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:26:15.754867  332015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:26:15.954809  332015 docker.go:234] disabling docker service ...
	I1123 09:26:15.954903  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:26:15.979986  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:26:15.995201  332015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:26:16.195305  332015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:26:16.373235  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:26:16.389735  332015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:26:16.410006  332015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:26:16.410174  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.419483  332015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:26:16.419592  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.428394  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.444114  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.463213  332015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:26:16.471981  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.480994  332015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.489302  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.498210  332015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:26:16.508001  332015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:26:16.516953  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:26:16.726052  332015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:26:16.986187  332015 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:26:16.986301  332015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:26:16.994949  332015 start.go:564] Will wait 60s for crictl version
	I1123 09:26:16.995057  332015 ssh_runner.go:195] Run: which crictl
	I1123 09:26:17.005848  332015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:26:17.068139  332015 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:26:17.068261  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:26:17.123372  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:26:17.173210  332015 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:26:17.176207  332015 out.go:179]   - env NO_PROXY=192.168.49.2
	I1123 09:26:17.179404  332015 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1123 09:26:17.182767  332015 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1123 09:26:17.185787  332015 cli_runner.go:164] Run: docker network inspect ha-857095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:26:17.204073  332015 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 09:26:17.207997  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:26:17.218002  332015 mustload.go:66] Loading cluster: ha-857095
	I1123 09:26:17.218249  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:17.218496  332015 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:26:17.246745  332015 host.go:66] Checking if "ha-857095" exists ...
	I1123 09:26:17.247017  332015 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095 for IP: 192.168.49.5
	I1123 09:26:17.247024  332015 certs.go:195] generating shared ca certs ...
	I1123 09:26:17.247040  332015 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:26:17.247177  332015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 09:26:17.247217  332015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 09:26:17.247228  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1123 09:26:17.247241  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1123 09:26:17.247254  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1123 09:26:17.247265  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1123 09:26:17.247315  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 09:26:17.247346  332015 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 09:26:17.247354  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:26:17.247382  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:26:17.247406  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:26:17.247429  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 09:26:17.247473  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:26:17.247504  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /usr/share/ca-certificates/2849042.pem
	I1123 09:26:17.247517  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:17.247527  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem -> /usr/share/ca-certificates/284904.pem
	I1123 09:26:17.247544  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:26:17.302193  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:26:17.327160  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:26:17.353974  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:26:17.377204  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 09:26:17.403460  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:26:17.423323  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 09:26:17.448832  332015 ssh_runner.go:195] Run: openssl version
	I1123 09:26:17.456781  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 09:26:17.467249  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 09:26:17.472303  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 09:26:17.472418  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 09:26:17.523101  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:26:17.535534  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:26:17.546862  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:17.552603  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:17.552699  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:17.599146  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:26:17.610235  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 09:26:17.618699  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 09:26:17.623313  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 09:26:17.623432  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 09:26:17.676492  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 09:26:17.685680  332015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:26:17.690257  332015 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:26:17.690334  332015 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1123 09:26:17.690451  332015 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-857095-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:26:17.690571  332015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:26:17.699579  332015 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:26:17.699678  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1123 09:26:17.711806  332015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 09:26:17.726908  332015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:26:17.741366  332015 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1123 09:26:17.745929  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:26:17.756453  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:26:17.960408  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:26:17.989357  332015 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1123 09:26:17.989946  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:17.994531  332015 out.go:179] * Verifying Kubernetes components...
	I1123 09:26:17.998123  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:26:18.239793  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:26:18.262774  332015 kapi.go:59] client config for ha-857095: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1123 09:26:18.262843  332015 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1123 09:26:18.263099  332015 node_ready.go:35] waiting up to 6m0s for node "ha-857095-m04" to be "Ready" ...
	I1123 09:26:18.269812  332015 node_ready.go:49] node "ha-857095-m04" is "Ready"
	I1123 09:26:18.269839  332015 node_ready.go:38] duration metric: took 6.727383ms for node "ha-857095-m04" to be "Ready" ...
	I1123 09:26:18.269854  332015 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:26:18.269907  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:26:18.288660  332015 system_svc.go:56] duration metric: took 18.797608ms WaitForService to wait for kubelet
	I1123 09:26:18.288686  332015 kubeadm.go:587] duration metric: took 299.282478ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:26:18.288702  332015 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:26:18.292995  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:18.293021  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:18.293032  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:18.293037  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:18.293042  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:18.293046  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:18.293051  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:18.293055  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:18.293059  332015 node_conditions.go:105] duration metric: took 4.352482ms to run NodePressure ...
	I1123 09:26:18.293072  332015 start.go:242] waiting for startup goroutines ...
	I1123 09:26:18.293094  332015 start.go:256] writing updated cluster config ...
	I1123 09:26:18.293459  332015 ssh_runner.go:195] Run: rm -f paused
	I1123 09:26:18.297614  332015 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:26:18.298096  332015 kapi.go:59] client config for ha-857095: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:26:18.325623  332015 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gqskt" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:26:20.334064  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	W1123 09:26:22.832313  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	W1123 09:26:24.834199  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	W1123 09:26:27.335305  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	W1123 09:26:29.831965  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	W1123 09:26:31.861015  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	I1123 09:26:32.333037  332015 pod_ready.go:94] pod "coredns-66bc5c9577-gqskt" is "Ready"
	I1123 09:26:32.333066  332015 pod_ready.go:86] duration metric: took 14.007410196s for pod "coredns-66bc5c9577-gqskt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.333077  332015 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kqvhl" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.338930  332015 pod_ready.go:94] pod "coredns-66bc5c9577-kqvhl" is "Ready"
	I1123 09:26:32.338959  332015 pod_ready.go:86] duration metric: took 5.876773ms for pod "coredns-66bc5c9577-kqvhl" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.342889  332015 pod_ready.go:83] waiting for pod "etcd-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.354954  332015 pod_ready.go:94] pod "etcd-ha-857095" is "Ready"
	I1123 09:26:32.354982  332015 pod_ready.go:86] duration metric: took 12.06568ms for pod "etcd-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.354992  332015 pod_ready.go:83] waiting for pod "etcd-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.360908  332015 pod_ready.go:94] pod "etcd-ha-857095-m02" is "Ready"
	I1123 09:26:32.360988  332015 pod_ready.go:86] duration metric: took 5.989209ms for pod "etcd-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.361006  332015 pod_ready.go:83] waiting for pod "etcd-ha-857095-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.527237  332015 request.go:683] "Waited before sending request" delay="163.188719ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095-m03"
	I1123 09:26:32.531141  332015 pod_ready.go:94] pod "etcd-ha-857095-m03" is "Ready"
	I1123 09:26:32.531176  332015 pod_ready.go:86] duration metric: took 170.163678ms for pod "etcd-ha-857095-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.727633  332015 request.go:683] "Waited before sending request" delay="196.333255ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1123 09:26:32.731295  332015 pod_ready.go:83] waiting for pod "kube-apiserver-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.927721  332015 request.go:683] "Waited before sending request" delay="196.318551ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857095"
	I1123 09:26:33.127610  332015 request.go:683] "Waited before sending request" delay="196.351881ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095"
	I1123 09:26:33.131377  332015 pod_ready.go:94] pod "kube-apiserver-ha-857095" is "Ready"
	I1123 09:26:33.131404  332015 pod_ready.go:86] duration metric: took 400.08428ms for pod "kube-apiserver-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:33.131415  332015 pod_ready.go:83] waiting for pod "kube-apiserver-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:33.326734  332015 request.go:683] "Waited before sending request" delay="195.246384ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857095-m02"
	I1123 09:26:33.527259  332015 request.go:683] "Waited before sending request" delay="197.325627ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095-m02"
	I1123 09:26:33.531408  332015 pod_ready.go:94] pod "kube-apiserver-ha-857095-m02" is "Ready"
	I1123 09:26:33.531476  332015 pod_ready.go:86] duration metric: took 400.053592ms for pod "kube-apiserver-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:33.531510  332015 pod_ready.go:83] waiting for pod "kube-apiserver-ha-857095-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:33.726854  332015 request.go:683] "Waited before sending request" delay="195.24293ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857095-m03"
	I1123 09:26:33.927056  332015 request.go:683] "Waited before sending request" delay="196.304447ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095-m03"
	I1123 09:26:33.930670  332015 pod_ready.go:94] pod "kube-apiserver-ha-857095-m03" is "Ready"
	I1123 09:26:33.930738  332015 pod_ready.go:86] duration metric: took 399.207142ms for pod "kube-apiserver-ha-857095-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:34.127173  332015 request.go:683] "Waited before sending request" delay="196.311848ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1123 09:26:34.131888  332015 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:34.327442  332015 request.go:683] "Waited before sending request" delay="195.421664ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857095"
	I1123 09:26:34.526909  332015 request.go:683] "Waited before sending request" delay="195.121754ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095"
	I1123 09:26:34.727795  332015 request.go:683] "Waited before sending request" delay="95.293534ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857095"
	I1123 09:26:34.926808  332015 request.go:683] "Waited before sending request" delay="192.288691ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095"
	I1123 09:26:35.326671  332015 request.go:683] "Waited before sending request" delay="190.240931ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095"
	I1123 09:26:35.727087  332015 request.go:683] "Waited before sending request" delay="90.213857ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095"
	W1123 09:26:36.147664  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:38.639668  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:41.138106  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:43.639146  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:46.140223  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:48.638331  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:50.639066  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	I1123 09:26:51.639670  332015 pod_ready.go:94] pod "kube-controller-manager-ha-857095" is "Ready"
	I1123 09:26:51.639700  332015 pod_ready.go:86] duration metric: took 17.507743609s for pod "kube-controller-manager-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:51.639710  332015 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:26:53.652573  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:26:56.146503  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:26:58.147735  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:00.225967  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:02.647589  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:04.647752  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:07.153585  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:09.646738  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:12.145665  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:14.146292  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:16.646315  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:18.649017  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:20.649200  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:23.146376  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:25.147713  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:27.646124  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:29.646694  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:32.147157  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:34.647065  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:37.145928  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:39.149680  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:41.646227  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:43.648098  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:46.145963  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:48.146438  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:50.147240  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:52.647369  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:55.146780  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:57.649707  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:00.227209  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:02.646807  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:04.646959  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:07.146296  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:09.646937  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:11.648675  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:14.146286  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:16.646924  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:18.651084  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:21.147312  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:23.646217  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:25.646310  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:27.646958  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:30.146762  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:32.647802  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:35.146446  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:37.147422  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:39.647209  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:42.147709  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:44.646580  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:47.146583  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:49.646857  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:51.647231  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:54.147109  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:56.646513  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:58.646743  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:00.647210  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:03.146363  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:05.146523  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:07.147002  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:09.647653  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:12.146246  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:14.146687  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:16.157442  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:18.649348  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:21.146242  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:23.146404  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:25.646842  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:27.647159  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:29.647890  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:32.147183  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:34.647714  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:37.146420  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:39.146792  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:41.646176  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:43.646530  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:46.147106  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:48.149876  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:50.646833  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:53.145934  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:55.147151  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:57.646423  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:59.646898  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:01.651276  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:04.146294  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:06.150790  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:08.648014  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:11.147652  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:13.646274  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:16.147137  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	I1123 09:30:18.297999  332015 pod_ready.go:86] duration metric: took 3m26.658254957s for pod "kube-controller-manager-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:30:18.298033  332015 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1123 09:30:18.298048  332015 pod_ready.go:40] duration metric: took 4m0.000406947s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:30:18.301156  332015 out.go:203] 
	W1123 09:30:18.304209  332015 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1123 09:30:18.307045  332015 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-857095 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-857095
helpers_test.go:243: (dbg) docker inspect ha-857095:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8",
	        "Created": "2025-11-23T09:18:21.765330623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 332137,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:23:56.467420574Z",
	            "FinishedAt": "2025-11-23T09:23:55.842197436Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8/hosts",
	        "LogPath": "/var/lib/docker/containers/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8-json.log",
	        "Name": "/ha-857095",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-857095:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-857095",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8",
	                "LowerDir": "/var/lib/docker/overlay2/7b0839d24d2a3baaaf22d9c15821d50414819cc142231fd0b30407a9910e5b2a-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b0839d24d2a3baaaf22d9c15821d50414819cc142231fd0b30407a9910e5b2a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b0839d24d2a3baaaf22d9c15821d50414819cc142231fd0b30407a9910e5b2a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b0839d24d2a3baaaf22d9c15821d50414819cc142231fd0b30407a9910e5b2a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-857095",
	                "Source": "/var/lib/docker/volumes/ha-857095/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-857095",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-857095",
	                "name.minikube.sigs.k8s.io": "ha-857095",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c0efb14adc07b4c286adcc93e164cef6836115ca98a2993e2ff3c5210cff68f1",
	            "SandboxKey": "/var/run/docker/netns/c0efb14adc07",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-857095": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:66:b1:d7:8d:5f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d56166f18c3a11f7c4d9e5d1ffa88fcabe405ba7af460096f6e964bfe85cc560",
	                    "EndpointID": "fb53906039e958dd0bfc9dec4873b4afafd8f1e971bdb75d9d0cf827c82fb8d3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-857095",
	                        "8497a55e0a4e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
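Note: the "NetworkSettings.Ports" map in the inspect output above is where the harness learns the host-side ports of the restarted container (22/tcp → 33182 for SSH, 8443/tcp → 33185 for the API server). The "Last Start" log further below reads the same value with a Go template passed to docker container inspect -f. A minimal sketch of that lookup, shelling out to the docker CLI (use of os/exec and the hard-coded container name are illustrative only, not minikube's own implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Template copied from the restart log below; selects the host port
    	// bound to the container's 22/tcp (SSH) mapping.
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "ha-857095").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(strings.TrimSpace(string(out))) // e.g. 33182 for the container above
    }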
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-857095 -n ha-857095
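The --format={{.Host}} flag used here is a Go text/template evaluated over minikube's status output, the same mechanism the docker CLI uses for its -f templates. A self-contained sketch of how such a template selects a single field (the struct below is a stand-in for illustration, not minikube's actual status type, which also exposes fields such as Kubelet and APIServer):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Stand-in status struct for illustration only.
    type status struct {
    	Host    string
    	Kubelet string
    }

    func main() {
    	t := template.Must(template.New("status").Parse("{{.Host}}\n"))
    	_ = t.Execute(os.Stdout, status{Host: "Running", Kubelet: "Running"})
    	// prints: Running
    }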
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-857095 logs -n 25: (1.771403729s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-857095 cp ha-857095-m03:/home/docker/cp-test.txt ha-857095-m02:/home/docker/cp-test_ha-857095-m03_ha-857095-m02.txt               │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m02 sudo cat /home/docker/cp-test_ha-857095-m03_ha-857095-m02.txt                                         │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ cp      │ ha-857095 cp ha-857095-m03:/home/docker/cp-test.txt ha-857095-m04:/home/docker/cp-test_ha-857095-m03_ha-857095-m04.txt               │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m04 sudo cat /home/docker/cp-test_ha-857095-m03_ha-857095-m04.txt                                         │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ cp      │ ha-857095 cp testdata/cp-test.txt ha-857095-m04:/home/docker/cp-test.txt                                                             │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ cp      │ ha-857095 cp ha-857095-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1815903833/001/cp-test_ha-857095-m04.txt │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ cp      │ ha-857095 cp ha-857095-m04:/home/docker/cp-test.txt ha-857095:/home/docker/cp-test_ha-857095-m04_ha-857095.txt                       │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095 sudo cat /home/docker/cp-test_ha-857095-m04_ha-857095.txt                                                 │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ cp      │ ha-857095 cp ha-857095-m04:/home/docker/cp-test.txt ha-857095-m02:/home/docker/cp-test_ha-857095-m04_ha-857095-m02.txt               │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m02 sudo cat /home/docker/cp-test_ha-857095-m04_ha-857095-m02.txt                                         │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ cp      │ ha-857095 cp ha-857095-m04:/home/docker/cp-test.txt ha-857095-m03:/home/docker/cp-test_ha-857095-m04_ha-857095-m03.txt               │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m03 sudo cat /home/docker/cp-test_ha-857095-m04_ha-857095-m03.txt                                         │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ node    │ ha-857095 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ node    │ ha-857095 node start m02 --alsologtostderr -v 5                                                                                      │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:23 UTC │
	│ node    │ ha-857095 node list --alsologtostderr -v 5                                                                                           │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:23 UTC │                     │
	│ stop    │ ha-857095 stop --alsologtostderr -v 5                                                                                                │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:23 UTC │ 23 Nov 25 09:23 UTC │
	│ start   │ ha-857095 start --wait true --alsologtostderr -v 5                                                                                   │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:23 UTC │                     │
	│ node    │ ha-857095 node list --alsologtostderr -v 5                                                                                           │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:23:56
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:23:56.195666  332015 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:23:56.195782  332015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:23:56.195793  332015 out.go:374] Setting ErrFile to fd 2...
	I1123 09:23:56.195799  332015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:23:56.196022  332015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:23:56.196372  332015 out.go:368] Setting JSON to false
	I1123 09:23:56.197168  332015 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7585,"bootTime":1763882251,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 09:23:56.197241  332015 start.go:143] virtualization:  
	I1123 09:23:56.202491  332015 out.go:179] * [ha-857095] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 09:23:56.205469  332015 notify.go:221] Checking for updates...
	I1123 09:23:56.205985  332015 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:23:56.209103  332015 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:23:56.212257  332015 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:23:56.214935  332015 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 09:23:56.217823  332015 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 09:23:56.220754  332015 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:23:56.224090  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:23:56.224192  332015 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:23:56.248091  332015 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 09:23:56.248221  332015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:23:56.316560  332015 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-23 09:23:56.306152339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:23:56.316667  332015 docker.go:319] overlay module found
	I1123 09:23:56.319905  332015 out.go:179] * Using the docker driver based on existing profile
	I1123 09:23:56.322883  332015 start.go:309] selected driver: docker
	I1123 09:23:56.322910  332015 start.go:927] validating driver "docker" against &{Name:ha-857095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:23:56.323070  332015 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:23:56.323169  332015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:23:56.383495  332015 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-23 09:23:56.374562034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:23:56.383895  332015 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:23:56.383914  332015 cni.go:84] Creating CNI manager for ""
	I1123 09:23:56.383965  332015 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1123 09:23:56.384008  332015 start.go:353] cluster config:
	{Name:ha-857095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:23:56.387318  332015 out.go:179] * Starting "ha-857095" primary control-plane node in "ha-857095" cluster
	I1123 09:23:56.390204  332015 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:23:56.393222  332015 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:23:56.395941  332015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:23:56.395987  332015 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 09:23:56.395997  332015 cache.go:65] Caching tarball of preloaded images
	I1123 09:23:56.396063  332015 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:23:56.396081  332015 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 09:23:56.396092  332015 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:23:56.396244  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:23:56.413619  332015 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:23:56.413643  332015 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:23:56.413663  332015 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:23:56.413694  332015 start.go:360] acquireMachinesLock for ha-857095: {Name:mk7ea4c3d6888276233865fa5f92414123c08091 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:23:56.413754  332015 start.go:364] duration metric: took 36.201µs to acquireMachinesLock for "ha-857095"
	I1123 09:23:56.413778  332015 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:23:56.413787  332015 fix.go:54] fixHost starting: 
	I1123 09:23:56.414049  332015 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:23:56.430596  332015 fix.go:112] recreateIfNeeded on ha-857095: state=Stopped err=<nil>
	W1123 09:23:56.430627  332015 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:23:56.433965  332015 out.go:252] * Restarting existing docker container for "ha-857095" ...
	I1123 09:23:56.434061  332015 cli_runner.go:164] Run: docker start ha-857095
	I1123 09:23:56.669371  332015 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:23:56.694309  332015 kic.go:430] container "ha-857095" state is running.
	I1123 09:23:56.694718  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095
	I1123 09:23:56.714939  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:23:56.715179  332015 machine.go:94] provisionDockerMachine start ...
	I1123 09:23:56.715249  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:23:56.739434  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:23:56.739774  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33182 <nil> <nil>}
	I1123 09:23:56.739790  332015 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:23:56.740583  332015 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45372->127.0.0.1:33182: read: connection reset by peer
	I1123 09:23:59.888928  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095
	
	I1123 09:23:59.888954  332015 ubuntu.go:182] provisioning hostname "ha-857095"
	I1123 09:23:59.889018  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:23:59.906579  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:23:59.906895  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33182 <nil> <nil>}
	I1123 09:23:59.906906  332015 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-857095 && echo "ha-857095" | sudo tee /etc/hostname
	I1123 09:24:00.143191  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095
	
	I1123 09:24:00.143304  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:00.200109  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:00.200444  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33182 <nil> <nil>}
	I1123 09:24:00.200460  332015 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857095/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:24:00.391079  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:24:00.391118  332015 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 09:24:00.391140  332015 ubuntu.go:190] setting up certificates
	I1123 09:24:00.391151  332015 provision.go:84] configureAuth start
	I1123 09:24:00.391221  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095
	I1123 09:24:00.416269  332015 provision.go:143] copyHostCerts
	I1123 09:24:00.416328  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:24:00.416373  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 09:24:00.416396  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:24:00.416502  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 09:24:00.416616  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:24:00.416643  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 09:24:00.416649  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:24:00.416685  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 09:24:00.416740  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:24:00.416764  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 09:24:00.416769  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:24:00.416796  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 09:24:00.416852  332015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.ha-857095 san=[127.0.0.1 192.168.49.2 ha-857095 localhost minikube]
	I1123 09:24:00.654716  332015 provision.go:177] copyRemoteCerts
	I1123 09:24:00.654793  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:24:00.654834  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:00.677057  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:24:00.781001  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1123 09:24:00.781107  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:24:00.798881  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1123 09:24:00.798961  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1123 09:24:00.816589  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1123 09:24:00.816669  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:24:00.834536  332015 provision.go:87] duration metric: took 443.371132ms to configureAuth
	I1123 09:24:00.834605  332015 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:24:00.834885  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:24:00.835007  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:00.852135  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:00.852465  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33182 <nil> <nil>}
	I1123 09:24:00.852484  332015 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:24:01.230722  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:24:01.230745  332015 machine.go:97] duration metric: took 4.515545369s to provisionDockerMachine
	I1123 09:24:01.230757  332015 start.go:293] postStartSetup for "ha-857095" (driver="docker")
	I1123 09:24:01.230784  332015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:24:01.230849  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:24:01.230895  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:01.255652  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:24:01.361493  332015 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:24:01.364819  332015 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:24:01.364849  332015 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:24:01.364861  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 09:24:01.364917  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 09:24:01.364992  332015 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 09:24:01.365000  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /etc/ssl/certs/2849042.pem
	I1123 09:24:01.365102  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:24:01.373236  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:24:01.391175  332015 start.go:296] duration metric: took 160.402274ms for postStartSetup
	I1123 09:24:01.391305  332015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:24:01.391349  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:01.408403  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:24:01.514432  332015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:24:01.519130  332015 fix.go:56] duration metric: took 5.105336191s for fixHost
	I1123 09:24:01.519158  332015 start.go:83] releasing machines lock for "ha-857095", held for 5.105389919s
	I1123 09:24:01.519225  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095
	I1123 09:24:01.535905  332015 ssh_runner.go:195] Run: cat /version.json
	I1123 09:24:01.535965  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:01.536231  332015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:24:01.536282  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:01.562880  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:24:01.565249  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:24:01.665009  332015 ssh_runner.go:195] Run: systemctl --version
	I1123 09:24:01.757828  332015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:24:01.794910  332015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:24:01.799455  332015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:24:01.799605  332015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:24:01.807720  332015 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:24:01.807746  332015 start.go:496] detecting cgroup driver to use...
	I1123 09:24:01.807800  332015 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:24:01.807878  332015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:24:01.822720  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:24:01.836248  332015 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:24:01.836404  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:24:01.853658  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:24:01.867264  332015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:24:01.974745  332015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:24:02.101306  332015 docker.go:234] disabling docker service ...
	I1123 09:24:02.101464  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:24:02.117932  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:24:02.131548  332015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:24:02.243604  332015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:24:02.362672  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:24:02.376516  332015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:24:02.391962  332015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:24:02.392048  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.400619  332015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:24:02.400698  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.410062  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.419774  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.429277  332015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:24:02.438031  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.447555  332015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.455833  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.464518  332015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:24:02.472029  332015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:24:02.479828  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:24:02.606510  332015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:24:02.773593  332015 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:24:02.773712  332015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:24:02.778273  332015 start.go:564] Will wait 60s for crictl version
	I1123 09:24:02.778386  332015 ssh_runner.go:195] Run: which crictl
	I1123 09:24:02.782031  332015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:24:02.805950  332015 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:24:02.806105  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:24:02.837219  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:24:02.868046  332015 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:24:02.870882  332015 cli_runner.go:164] Run: docker network inspect ha-857095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:24:02.888727  332015 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 09:24:02.893087  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:24:02.903114  332015 kubeadm.go:884] updating cluster {Name:ha-857095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:24:02.903266  332015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:24:02.903340  332015 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:24:02.938058  332015 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:24:02.938082  332015 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:24:02.938142  332015 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:24:02.965340  332015 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:24:02.965366  332015 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:24:02.965376  332015 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1123 09:24:02.965526  332015 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-857095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:24:02.965621  332015 ssh_runner.go:195] Run: crio config
	I1123 09:24:03.024329  332015 cni.go:84] Creating CNI manager for ""
	I1123 09:24:03.024405  332015 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1123 09:24:03.024439  332015 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:24:03.024493  332015 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-857095 NodeName:ha-857095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:24:03.024670  332015 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-857095"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:24:03.024706  332015 kube-vip.go:115] generating kube-vip config ...
	I1123 09:24:03.024788  332015 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1123 09:24:03.037111  332015 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:24:03.037290  332015 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1123 09:24:03.037395  332015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:24:03.045237  332015 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:24:03.045328  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1123 09:24:03.053429  332015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1123 09:24:03.066204  332015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:24:03.078929  332015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1123 09:24:03.092229  332015 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1123 09:24:03.104792  332015 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1123 09:24:03.108474  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
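	(The grep/echo/cp pipeline just above is an idempotent hosts-file update: drop any stale control-plane.minikube.internal line, append the current mapping, and copy the temp file back over /etc/hosts. A minimal Go sketch of the same pattern follows; the ensureHostsEntry helper and the direct file write are illustrative assumptions, not minikube's actual code, which runs the shell pipeline remotely with sudo.)

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry rewrites hostsPath so exactly one line maps ip to host,
    // mirroring the `{ grep -v ...; echo ...; } > /tmp/h.$$; cp ...` pipeline above.
    // Hypothetical helper for illustration only.
    func ensureHostsEntry(hostsPath, ip, host string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // drop any stale mapping for this hostname
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }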
	I1123 09:24:03.118280  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:24:03.231167  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:24:03.246187  332015 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095 for IP: 192.168.49.2
	I1123 09:24:03.246257  332015 certs.go:195] generating shared ca certs ...
	I1123 09:24:03.246288  332015 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:03.246475  332015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 09:24:03.246549  332015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 09:24:03.246586  332015 certs.go:257] generating profile certs ...
	I1123 09:24:03.246711  332015 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key
	I1123 09:24:03.246768  332015 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.fbc14aa1
	I1123 09:24:03.246799  332015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt.fbc14aa1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1123 09:24:03.300262  332015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt.fbc14aa1 ...
	I1123 09:24:03.300340  332015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt.fbc14aa1: {Name:mk96366c0e17998ceef956dc2b188d7321ecf01f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:03.300600  332015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.fbc14aa1 ...
	I1123 09:24:03.300633  332015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.fbc14aa1: {Name:mk3d8a4e6dd8546bed5a8d4ed49833bd7f302bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:03.300779  332015 certs.go:382] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt.fbc14aa1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt
	I1123 09:24:03.300944  332015 certs.go:386] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.fbc14aa1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key
	I1123 09:24:03.301074  332015 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key
	I1123 09:24:03.301086  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1123 09:24:03.301100  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1123 09:24:03.301112  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1123 09:24:03.301123  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1123 09:24:03.301134  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1123 09:24:03.301149  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1123 09:24:03.301161  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1123 09:24:03.301173  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1123 09:24:03.301228  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 09:24:03.301260  332015 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 09:24:03.301268  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:24:03.301296  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:24:03.301321  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:24:03.301343  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 09:24:03.301386  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:24:03.301443  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /usr/share/ca-certificates/2849042.pem
	I1123 09:24:03.301458  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:24:03.301469  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem -> /usr/share/ca-certificates/284904.pem
	I1123 09:24:03.302078  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:24:03.321449  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:24:03.344480  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:24:03.366943  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:24:03.388807  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 09:24:03.417016  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:24:03.441422  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:24:03.466546  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:24:03.486297  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 09:24:03.505713  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:24:03.523380  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 09:24:03.541787  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:24:03.553943  332015 ssh_runner.go:195] Run: openssl version
	I1123 09:24:03.560161  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 09:24:03.569902  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 09:24:03.574290  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 09:24:03.574428  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 09:24:03.615270  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:24:03.622991  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:24:03.631168  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:24:03.634814  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:24:03.634879  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:24:03.675522  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:24:03.683531  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 09:24:03.692221  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 09:24:03.695819  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 09:24:03.695881  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 09:24:03.736556  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
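	(The openssl -hash / ln -fs pairs above install each CA certificate into OpenSSL's hashed directory layout: a PEM under /etc/ssl/certs is only found during verification if a <subject-hash>.0 symlink points at it. A rough Go equivalent of one such step follows; hashLink is a hypothetical sketch that shells out to openssl exactly as the log does, but omits sudo and hash-collision suffixes.)

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // hashLink creates certsDir/<subject-hash>.0 pointing at pemPath, the layout
    // OpenSSL uses to locate trusted CAs. Illustrative only.
    func hashLink(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	if _, err := os.Lstat(link); err == nil {
    		return nil // already linked
    	}
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }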
	I1123 09:24:03.744109  332015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:24:03.747786  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:24:03.788604  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:24:03.830353  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:24:03.884091  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:24:03.938208  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:24:03.984397  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
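	(Each `-checkend 86400` call above asks openssl whether the certificate expires within the next 24 hours, i.e. 86400 seconds; a non-zero exit would trigger regeneration before the restart continues. The same test can be expressed natively with Go's crypto/x509; the sketch below is a hedged equivalent, not what minikube actually runs.)

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in pemPath expires within d,
    // the question "openssl x509 -noout -checkend 86400" answers for 24 hours.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }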
	I1123 09:24:04.045856  332015 kubeadm.go:401] StartCluster: {Name:ha-857095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:24:04.046023  332015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:24:04.046122  332015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:24:04.075084  332015 cri.go:89] found id: "3f803f0d2708c2458335864b38cbe1261399f59c726a34053cba0f4d0c4267e2"
	I1123 09:24:04.075156  332015 cri.go:89] found id: ""
	I1123 09:24:04.075240  332015 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:24:04.094693  332015 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:24:04Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:24:04.094840  332015 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:24:04.120167  332015 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:24:04.120234  332015 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:24:04.120315  332015 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:24:04.133113  332015 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:24:04.133681  332015 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-857095" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:24:04.133848  332015 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-282998/kubeconfig needs updating (will repair): [kubeconfig missing "ha-857095" cluster setting kubeconfig missing "ha-857095" context setting]
	I1123 09:24:04.134501  332015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:04.135077  332015 kapi.go:59] client config for ha-857095: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:24:04.135715  332015 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1123 09:24:04.135788  332015 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1123 09:24:04.135810  332015 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1123 09:24:04.135854  332015 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1123 09:24:04.135880  332015 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1123 09:24:04.135767  332015 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1123 09:24:04.137370  332015 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:24:04.163446  332015 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1123 09:24:04.163519  332015 kubeadm.go:602] duration metric: took 43.265409ms to restartPrimaryControlPlane
	I1123 09:24:04.163544  332015 kubeadm.go:403] duration metric: took 117.700121ms to StartCluster
	I1123 09:24:04.163589  332015 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:04.163671  332015 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:24:04.164252  332015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:04.164494  332015 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:24:04.164539  332015 start.go:242] waiting for startup goroutines ...
	I1123 09:24:04.164560  332015 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:24:04.165096  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:24:04.170570  332015 out.go:179] * Enabled addons: 
	I1123 09:24:04.174575  332015 addons.go:530] duration metric: took 10.006073ms for enable addons: enabled=[]
	I1123 09:24:04.174659  332015 start.go:247] waiting for cluster config update ...
	I1123 09:24:04.174681  332015 start.go:256] writing updated cluster config ...
	I1123 09:24:04.178275  332015 out.go:203] 
	I1123 09:24:04.181916  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:24:04.182093  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:24:04.188964  332015 out.go:179] * Starting "ha-857095-m02" control-plane node in "ha-857095" cluster
	I1123 09:24:04.192293  332015 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:24:04.195633  332015 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:24:04.198557  332015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:24:04.198653  332015 cache.go:65] Caching tarball of preloaded images
	I1123 09:24:04.198625  332015 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:24:04.198998  332015 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 09:24:04.199035  332015 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:24:04.199196  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:24:04.236750  332015 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:24:04.236768  332015 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:24:04.236781  332015 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:24:04.236803  332015 start.go:360] acquireMachinesLock for ha-857095-m02: {Name:mk302f2371cf69337e911dfb76261e6364d80001 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:24:04.236853  332015 start.go:364] duration metric: took 36.242µs to acquireMachinesLock for "ha-857095-m02"
	I1123 09:24:04.236872  332015 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:24:04.236877  332015 fix.go:54] fixHost starting: m02
	I1123 09:24:04.237131  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m02 --format={{.State.Status}}
	I1123 09:24:04.264568  332015 fix.go:112] recreateIfNeeded on ha-857095-m02: state=Stopped err=<nil>
	W1123 09:24:04.264592  332015 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:24:04.268071  332015 out.go:252] * Restarting existing docker container for "ha-857095-m02" ...
	I1123 09:24:04.268150  332015 cli_runner.go:164] Run: docker start ha-857095-m02
	I1123 09:24:04.652204  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m02 --format={{.State.Status}}
	I1123 09:24:04.680714  332015 kic.go:430] container "ha-857095-m02" state is running.
	I1123 09:24:04.681090  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m02
	I1123 09:24:04.707062  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:24:04.707317  332015 machine.go:94] provisionDockerMachine start ...
	I1123 09:24:04.707387  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:04.741254  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:04.741586  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33187 <nil> <nil>}
	I1123 09:24:04.741597  332015 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:24:04.742229  332015 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34184->127.0.0.1:33187: read: connection reset by peer
	I1123 09:24:08.002494  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m02
	
	I1123 09:24:08.002568  332015 ubuntu.go:182] provisioning hostname "ha-857095-m02"
	I1123 09:24:08.002678  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:08.029049  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:08.029348  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33187 <nil> <nil>}
	I1123 09:24:08.029358  332015 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-857095-m02 && echo "ha-857095-m02" | sudo tee /etc/hostname
	I1123 09:24:08.253783  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m02
	
	I1123 09:24:08.253924  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:08.291114  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:08.291434  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33187 <nil> <nil>}
	I1123 09:24:08.291450  332015 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857095-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857095-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857095-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:24:08.491050  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:24:08.491119  332015 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 09:24:08.491152  332015 ubuntu.go:190] setting up certificates
	I1123 09:24:08.491194  332015 provision.go:84] configureAuth start
	I1123 09:24:08.491321  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m02
	I1123 09:24:08.526937  332015 provision.go:143] copyHostCerts
	I1123 09:24:08.526984  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:24:08.527020  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 09:24:08.527027  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:24:08.527102  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 09:24:08.527176  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:24:08.527192  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 09:24:08.527197  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:24:08.527222  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 09:24:08.527259  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:24:08.527274  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 09:24:08.527278  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:24:08.527300  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 09:24:08.527343  332015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.ha-857095-m02 san=[127.0.0.1 192.168.49.3 ha-857095-m02 localhost minikube]
	I1123 09:24:09.262765  332015 provision.go:177] copyRemoteCerts
	I1123 09:24:09.262880  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:24:09.262954  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:09.280151  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m02/id_rsa Username:docker}
	I1123 09:24:09.397744  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1123 09:24:09.397799  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 09:24:09.444274  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1123 09:24:09.444335  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 09:24:09.474176  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1123 09:24:09.474229  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:24:09.500995  332015 provision.go:87] duration metric: took 1.009770735s to configureAuth
	I1123 09:24:09.501071  332015 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:24:09.501370  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:24:09.501570  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:09.541669  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:09.541983  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33187 <nil> <nil>}
	I1123 09:24:09.541997  332015 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:24:10.717183  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:24:10.717209  332015 machine.go:97] duration metric: took 6.009881771s to provisionDockerMachine
	I1123 09:24:10.717221  332015 start.go:293] postStartSetup for "ha-857095-m02" (driver="docker")
	I1123 09:24:10.717231  332015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:24:10.717289  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:24:10.717340  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:10.743261  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m02/id_rsa Username:docker}
	I1123 09:24:10.873831  332015 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:24:10.882112  332015 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:24:10.882138  332015 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:24:10.882150  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 09:24:10.882203  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 09:24:10.882279  332015 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 09:24:10.882286  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /etc/ssl/certs/2849042.pem
	I1123 09:24:10.882384  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:24:10.897705  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:24:10.928947  332015 start.go:296] duration metric: took 211.710763ms for postStartSetup
	I1123 09:24:10.929078  332015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:24:10.929161  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:10.965095  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m02/id_rsa Username:docker}
	I1123 09:24:11.077996  332015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:24:11.083158  332015 fix.go:56] duration metric: took 6.846271288s for fixHost
	I1123 09:24:11.083241  332015 start.go:83] releasing machines lock for "ha-857095-m02", held for 6.846378251s
	I1123 09:24:11.083359  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m02
	I1123 09:24:11.142549  332015 out.go:179] * Found network options:
	I1123 09:24:11.145622  332015 out.go:179]   - NO_PROXY=192.168.49.2
	W1123 09:24:11.148481  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:24:11.148526  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	I1123 09:24:11.148594  332015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:24:11.148633  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:11.148887  332015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:24:11.148950  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:11.179109  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m02/id_rsa Username:docker}
	I1123 09:24:11.188793  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m02/id_rsa Username:docker}
	I1123 09:24:11.691645  332015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:24:11.715317  332015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:24:11.715396  332015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:24:11.749537  332015 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:24:11.749567  332015 start.go:496] detecting cgroup driver to use...
	I1123 09:24:11.749599  332015 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:24:11.749652  332015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:24:11.790996  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:24:11.823649  332015 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:24:11.823714  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:24:11.850236  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:24:11.868366  332015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:24:12.144454  332015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:24:12.480952  332015 docker.go:234] disabling docker service ...
	I1123 09:24:12.481086  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:24:12.566871  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:24:12.598895  332015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:24:12.943816  332015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:24:13.198846  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:24:13.220755  332015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:24:13.238071  332015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:24:13.238185  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.246445  332015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:24:13.246513  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.254941  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.263305  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.271300  332015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:24:13.278821  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.288129  332015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.296195  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.304236  332015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:24:13.311253  332015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
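	(The sed run above rewrites CRI-O's drop-in before crio is restarted: pause_image becomes "registry.k8s.io/pause:3.10.1", cgroup_manager becomes "cgroupfs", conmon_cgroup is reset to "pod", and a default_sysctls entry adds net.ipv4.ip_unprivileged_port_start=0 so pods can bind low ports. A small Go sketch of the key/value rewrite those `sed -i 's|^.*key = .*$|...|'` one-liners perform follows; setCrioOption is a hypothetical local helper, whereas minikube itself runs sed over SSH.)

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setCrioOption rewrites any existing "key = ..." line in the CRI-O drop-in
    // to the desired quoted value, mirroring the sed one-liners above.
    func setCrioOption(confPath, key, value string) error {
    	data, err := os.ReadFile(confPath)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
    	return os.WriteFile(confPath, out, 0o644)
    }

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	for key, value := range map[string]string{
    		"pause_image":    "registry.k8s.io/pause:3.10.1",
    		"cgroup_manager": "cgroupfs",
    	} {
    		if err := setCrioOption(conf, key, value); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    			os.Exit(1)
    		}
    	}
    }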
	I1123 09:24:13.318479  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:24:13.535394  332015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:25:43.810612  332015 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.275182672s)
	I1123 09:25:43.810639  332015 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:25:43.810701  332015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:25:43.814933  332015 start.go:564] Will wait 60s for crictl version
	I1123 09:25:43.814992  332015 ssh_runner.go:195] Run: which crictl
	I1123 09:25:43.818922  332015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:25:43.846107  332015 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:25:43.846200  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:25:43.877706  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:25:43.909681  332015 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:25:43.912736  332015 out.go:179]   - env NO_PROXY=192.168.49.2
	I1123 09:25:43.915738  332015 cli_runner.go:164] Run: docker network inspect ha-857095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:25:43.931587  332015 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 09:25:43.935281  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:25:43.944694  332015 mustload.go:66] Loading cluster: ha-857095
	I1123 09:25:43.944941  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:43.945204  332015 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:25:43.962501  332015 host.go:66] Checking if "ha-857095" exists ...
	I1123 09:25:43.962775  332015 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095 for IP: 192.168.49.3
	I1123 09:25:43.962789  332015 certs.go:195] generating shared ca certs ...
	I1123 09:25:43.962805  332015 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:25:43.962924  332015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 09:25:43.962987  332015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 09:25:43.962999  332015 certs.go:257] generating profile certs ...
	I1123 09:25:43.963077  332015 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key
	I1123 09:25:43.963146  332015 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.66daad91
	I1123 09:25:43.963186  332015 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key
	I1123 09:25:43.963194  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1123 09:25:43.963206  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1123 09:25:43.963217  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1123 09:25:43.963237  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1123 09:25:43.963248  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1123 09:25:43.963258  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1123 09:25:43.963270  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1123 09:25:43.963281  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1123 09:25:43.963328  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 09:25:43.963357  332015 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 09:25:43.963369  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:25:43.963395  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:25:43.963419  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:25:43.963442  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 09:25:43.963488  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:25:43.963520  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem -> /usr/share/ca-certificates/284904.pem
	I1123 09:25:43.963531  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /usr/share/ca-certificates/2849042.pem
	I1123 09:25:43.963542  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:25:43.963592  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:25:43.980751  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:25:44.081825  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1123 09:25:44.085802  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1123 09:25:44.094194  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1123 09:25:44.097956  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1123 09:25:44.106256  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1123 09:25:44.110273  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1123 09:25:44.118652  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1123 09:25:44.122439  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1123 09:25:44.130532  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1123 09:25:44.133997  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1123 09:25:44.142041  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1123 09:25:44.145750  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1123 09:25:44.154268  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:25:44.174536  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:25:44.191976  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:25:44.210168  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:25:44.228737  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 09:25:44.246711  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:25:44.264397  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:25:44.282548  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:25:44.301229  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 09:25:44.321400  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 09:25:44.340621  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:25:44.360219  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1123 09:25:44.374691  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1123 09:25:44.388106  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1123 09:25:44.402723  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1123 09:25:44.416062  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1123 09:25:44.429635  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1123 09:25:44.443050  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1123 09:25:44.456835  332015 ssh_runner.go:195] Run: openssl version
	I1123 09:25:44.463525  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 09:25:44.472737  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 09:25:44.476731  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 09:25:44.476844  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 09:25:44.517979  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 09:25:44.525850  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 09:25:44.536453  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 09:25:44.542534  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 09:25:44.542604  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 09:25:44.599744  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:25:44.613671  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:25:44.626248  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:25:44.630279  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:25:44.630347  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:25:44.717285  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
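The openssl/ln pairs above install each CA into the node's trust store using OpenSSL's subject-hash lookup convention: a CA is found in /etc/ssl/certs via a symlink named <subject-hash>.0 that points at the PEM. A minimal sketch of the same convention, reusing the minikubeCA path from the log:

    # Link a CA PEM into /etc/ssl/certs under its subject hash, as the commands above do.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"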
	I1123 09:25:44.727653  332015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:25:44.734478  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:25:44.781588  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:25:44.834781  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:25:44.900074  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:25:44.968766  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:25:45.046196  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
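The -checkend 86400 probes above ask OpenSSL whether each existing control-plane certificate remains valid for at least another 24 hours (86,400 seconds); exit status 0 means it will not expire inside that window, so the certs can be reused rather than regenerated. A minimal sketch of the same check over paths seen in the log:

    # Report any certificate that expires within the next 24h.
    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
        openssl x509 -noout -in "$crt" -checkend 86400 \
            || echo "WARNING: $crt expires within 24h" >&2
    done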
	I1123 09:25:45.126791  332015 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1123 09:25:45.126936  332015 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-857095-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:25:45.126971  332015 kube-vip.go:115] generating kube-vip config ...
	I1123 09:25:45.127039  332015 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1123 09:25:45.160018  332015 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:25:45.160101  332015 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
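Because `lsmod | grep ip_vs` found no ipvs modules, the generated kube-vip static pod above omits ipvs-based control-plane load balancing; the VIP 192.168.49.254 is still announced via ARP on eth0 (vip_arp: "true", cp_enable: "true") with leader election over the plndr-cp-lock lease. A minimal sketch of the module check, assuming you wanted ipvs available on the host kernel:

    # Check for the ipvs modules kube-vip's load-balancing mode relies on, and try to
    # load them if missing. Inside a kic container this depends on the host kernel,
    # which is why minikube simply falls back when the check fails.
    lsmod | grep -E '^ip_vs' || sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh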
	I1123 09:25:45.160193  332015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:25:45.182092  332015 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:25:45.182333  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1123 09:25:45.194768  332015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 09:25:45.221310  332015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:25:45.267324  332015 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1123 09:25:45.295755  332015 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1123 09:25:45.299778  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:25:45.311639  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:25:45.546785  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:25:45.562952  332015 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:25:45.563297  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:45.567237  332015 out.go:179] * Verifying Kubernetes components...
	I1123 09:25:45.570200  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:25:45.791204  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:25:45.805221  332015 kapi.go:59] client config for ha-857095: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1123 09:25:45.805300  332015 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1123 09:25:45.805568  332015 node_ready.go:35] waiting up to 6m0s for node "ha-857095-m02" to be "Ready" ...
	I1123 09:25:46.975594  332015 node_ready.go:49] node "ha-857095-m02" is "Ready"
	I1123 09:25:46.975708  332015 node_ready.go:38] duration metric: took 1.170108444s for node "ha-857095-m02" to be "Ready" ...
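The readiness wait above is driven through client-go against the overridden apiserver host; roughly the same gate can be expressed with kubectl (a sketch only; the kubeconfig context name is assumed to match the profile name):

    # Wait up to 6 minutes for the second control-plane node to report Ready.
    kubectl --context ha-857095 wait --for=condition=Ready node/ha-857095-m02 --timeout=6m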
	I1123 09:25:46.975722  332015 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:25:46.979095  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:25:47.015790  332015 api_server.go:72] duration metric: took 1.452452994s to wait for apiserver process to appear ...
	I1123 09:25:47.015827  332015 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:25:47.015848  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:47.055731  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 09:25:47.055771  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 09:25:47.516044  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:47.524553  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:25:47.524596  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:25:48.015961  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:48.027139  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:25:48.027189  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:25:48.516751  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:48.530354  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:25:48.530386  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:25:49.015933  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:49.026181  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:25:49.026224  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:25:49.516868  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:49.544816  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:25:49.544849  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:25:50.015977  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:50.031576  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 09:25:50.034154  332015 api_server.go:141] control plane version: v1.34.1
	I1123 09:25:50.034191  332015 api_server.go:131] duration metric: took 3.018357536s to wait for apiserver health ...
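The 403 for system:anonymous followed by 500s with `[-]poststarthook/rbac/bootstrap-roles failed` is the normal restart sequence: the probe is unauthenticated, so it is rejected until the RBAC bootstrap roles and the remaining post-start hooks complete, after which /healthz returns 200. A sketch of the same probe made with the profile's client certificate (paths taken from the client config logged above), using ?verbose to list the individual checks:

    MK=/home/jenkins/minikube-integration/21969-282998/.minikube
    curl --cacert "$MK/ca.crt" \
         --cert   "$MK/profiles/ha-857095/client.crt" \
         --key    "$MK/profiles/ha-857095/client.key" \
         "https://192.168.49.2:8443/healthz?verbose"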
	I1123 09:25:50.034201  332015 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:25:50.133527  332015 system_pods.go:59] 26 kube-system pods found
	I1123 09:25:50.133574  332015 system_pods.go:61] "coredns-66bc5c9577-gqskt" [9ec3e73a-4033-41ae-927a-50584a3e9653] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:25:50.133583  332015 system_pods.go:61] "coredns-66bc5c9577-kqvhl" [bcbbf58b-9d2d-4a51-b4c1-bfec16447df5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:25:50.133590  332015 system_pods.go:61] "etcd-ha-857095" [3eaffe71-9ce6-4a9b-8530-1de6a4ec8773] Running
	I1123 09:25:50.133596  332015 system_pods.go:61] "etcd-ha-857095-m02" [5f8628c9-5725-4ca9-9622-b42a9b63c833] Running
	I1123 09:25:50.133600  332015 system_pods.go:61] "etcd-ha-857095-m03" [2ec71863-ebd8-45ca-9f19-707503671154] Running
	I1123 09:25:50.133603  332015 system_pods.go:61] "kindnet-8bs9t" [d9dee210-2075-4095-8540-c13c401e5a68] Running
	I1123 09:25:50.133607  332015 system_pods.go:61] "kindnet-ls8hm" [b7c7ef9d-ebdd-4bd4-97e6-595b84787117] Running
	I1123 09:25:50.133622  332015 system_pods.go:61] "kindnet-r7p2c" [a4f419f5-ecbc-48e6-8f98-732c4ac5a977] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:25:50.133636  332015 system_pods.go:61] "kindnet-v5cch" [4bfed9c2-b321-43a0-a18b-c867696cf4cb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:25:50.133642  332015 system_pods.go:61] "kube-apiserver-ha-857095" [697606bd-c111-4922-adda-6902a7f40915] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:25:50.133647  332015 system_pods.go:61] "kube-apiserver-ha-857095-m02" [8516bae2-f830-4a82-aa30-dbd7bf657b52] Running
	I1123 09:25:50.133659  332015 system_pods.go:61] "kube-apiserver-ha-857095-m03" [9f6f5d7d-9bba-4b26-b928-05119bbc98af] Running
	I1123 09:25:50.133666  332015 system_pods.go:61] "kube-controller-manager-ha-857095" [026d1873-0078-4c87-a9c1-b5a615844bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:25:50.133671  332015 system_pods.go:61] "kube-controller-manager-ha-857095-m02" [51f4d1ee-3b47-49f2-907e-68598e7d88e1] Running
	I1123 09:25:50.133694  332015 system_pods.go:61] "kube-controller-manager-ha-857095-m03" [234e7d83-1430-4ee4-91e4-73bf5e7221dc] Running
	I1123 09:25:50.133700  332015 system_pods.go:61] "kube-proxy-275zc" [b46e4648-46c6-4f04-85bc-bbfd4aedc821] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:25:50.133704  332015 system_pods.go:61] "kube-proxy-6k46z" [f2387038-f806-4417-961a-cf4390f4b4a5] Running
	I1123 09:25:50.133712  332015 system_pods.go:61] "kube-proxy-9qgbr" [a03beba1-4074-45e0-a3a0-a4cf0917b9a8] Running
	I1123 09:25:50.133715  332015 system_pods.go:61] "kube-proxy-lqqmc" [81a61d2b-bb1b-46d7-9acc-035150e8061b] Running
	I1123 09:25:50.133721  332015 system_pods.go:61] "kube-scheduler-ha-857095" [0598722f-31ac-4529-8b00-94c9bccf8255] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:25:50.133732  332015 system_pods.go:61] "kube-scheduler-ha-857095-m02" [0d16a804-69c1-47f1-b32c-3b35f950765f] Running
	I1123 09:25:50.133737  332015 system_pods.go:61] "kube-scheduler-ha-857095-m03" [aaf4d61f-0ec3-4e06-912a-a87fc3ab3cdb] Running
	I1123 09:25:50.133741  332015 system_pods.go:61] "kube-vip-ha-857095" [41b5690c-90a6-4557-9e9c-fcb76fe0c548] Running
	I1123 09:25:50.133753  332015 system_pods.go:61] "kube-vip-ha-857095-m02" [9c7a58ce-d823-401a-9695-36a0b87ab3ca] Running
	I1123 09:25:50.133757  332015 system_pods.go:61] "kube-vip-ha-857095-m03" [3830c657-5386-4214-a319-d42e19a40c12] Running
	I1123 09:25:50.133761  332015 system_pods.go:61] "storage-provisioner" [fd6347d8-5602-4a34-875b-811bc8ea2bc2] Running
	I1123 09:25:50.133772  332015 system_pods.go:74] duration metric: took 99.565974ms to wait for pod list to return data ...
	I1123 09:25:50.133785  332015 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:25:50.235662  332015 default_sa.go:45] found service account: "default"
	I1123 09:25:50.235698  332015 default_sa.go:55] duration metric: took 101.906307ms for default service account to be created ...
	I1123 09:25:50.235710  332015 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:25:50.276224  332015 system_pods.go:86] 26 kube-system pods found
	I1123 09:25:50.276258  332015 system_pods.go:89] "coredns-66bc5c9577-gqskt" [9ec3e73a-4033-41ae-927a-50584a3e9653] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:25:50.276269  332015 system_pods.go:89] "coredns-66bc5c9577-kqvhl" [bcbbf58b-9d2d-4a51-b4c1-bfec16447df5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:25:50.276284  332015 system_pods.go:89] "etcd-ha-857095" [3eaffe71-9ce6-4a9b-8530-1de6a4ec8773] Running
	I1123 09:25:50.276290  332015 system_pods.go:89] "etcd-ha-857095-m02" [5f8628c9-5725-4ca9-9622-b42a9b63c833] Running
	I1123 09:25:50.276295  332015 system_pods.go:89] "etcd-ha-857095-m03" [2ec71863-ebd8-45ca-9f19-707503671154] Running
	I1123 09:25:50.276300  332015 system_pods.go:89] "kindnet-8bs9t" [d9dee210-2075-4095-8540-c13c401e5a68] Running
	I1123 09:25:50.276308  332015 system_pods.go:89] "kindnet-ls8hm" [b7c7ef9d-ebdd-4bd4-97e6-595b84787117] Running
	I1123 09:25:50.276314  332015 system_pods.go:89] "kindnet-r7p2c" [a4f419f5-ecbc-48e6-8f98-732c4ac5a977] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:25:50.276328  332015 system_pods.go:89] "kindnet-v5cch" [4bfed9c2-b321-43a0-a18b-c867696cf4cb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:25:50.276336  332015 system_pods.go:89] "kube-apiserver-ha-857095" [697606bd-c111-4922-adda-6902a7f40915] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:25:50.276345  332015 system_pods.go:89] "kube-apiserver-ha-857095-m02" [8516bae2-f830-4a82-aa30-dbd7bf657b52] Running
	I1123 09:25:50.276356  332015 system_pods.go:89] "kube-apiserver-ha-857095-m03" [9f6f5d7d-9bba-4b26-b928-05119bbc98af] Running
	I1123 09:25:50.276368  332015 system_pods.go:89] "kube-controller-manager-ha-857095" [026d1873-0078-4c87-a9c1-b5a615844bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:25:50.276374  332015 system_pods.go:89] "kube-controller-manager-ha-857095-m02" [51f4d1ee-3b47-49f2-907e-68598e7d88e1] Running
	I1123 09:25:50.276389  332015 system_pods.go:89] "kube-controller-manager-ha-857095-m03" [234e7d83-1430-4ee4-91e4-73bf5e7221dc] Running
	I1123 09:25:50.276395  332015 system_pods.go:89] "kube-proxy-275zc" [b46e4648-46c6-4f04-85bc-bbfd4aedc821] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:25:50.276399  332015 system_pods.go:89] "kube-proxy-6k46z" [f2387038-f806-4417-961a-cf4390f4b4a5] Running
	I1123 09:25:50.276405  332015 system_pods.go:89] "kube-proxy-9qgbr" [a03beba1-4074-45e0-a3a0-a4cf0917b9a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:25:50.276409  332015 system_pods.go:89] "kube-proxy-lqqmc" [81a61d2b-bb1b-46d7-9acc-035150e8061b] Running
	I1123 09:25:50.276418  332015 system_pods.go:89] "kube-scheduler-ha-857095" [0598722f-31ac-4529-8b00-94c9bccf8255] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:25:50.276439  332015 system_pods.go:89] "kube-scheduler-ha-857095-m02" [0d16a804-69c1-47f1-b32c-3b35f950765f] Running
	I1123 09:25:50.276443  332015 system_pods.go:89] "kube-scheduler-ha-857095-m03" [aaf4d61f-0ec3-4e06-912a-a87fc3ab3cdb] Running
	I1123 09:25:50.276448  332015 system_pods.go:89] "kube-vip-ha-857095" [41b5690c-90a6-4557-9e9c-fcb76fe0c548] Running
	I1123 09:25:50.276452  332015 system_pods.go:89] "kube-vip-ha-857095-m02" [9c7a58ce-d823-401a-9695-36a0b87ab3ca] Running
	I1123 09:25:50.276459  332015 system_pods.go:89] "kube-vip-ha-857095-m03" [3830c657-5386-4214-a319-d42e19a40c12] Running
	I1123 09:25:50.276463  332015 system_pods.go:89] "storage-provisioner" [fd6347d8-5602-4a34-875b-811bc8ea2bc2] Running
	I1123 09:25:50.276469  332015 system_pods.go:126] duration metric: took 40.753939ms to wait for k8s-apps to be running ...
	I1123 09:25:50.276477  332015 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:25:50.276538  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:25:50.296938  332015 system_svc.go:56] duration metric: took 20.452092ms WaitForService to wait for kubelet
	I1123 09:25:50.296975  332015 kubeadm.go:587] duration metric: took 4.73364502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:25:50.296993  332015 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:25:50.317399  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:25:50.317469  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:25:50.317482  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:25:50.317487  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:25:50.317491  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:25:50.317495  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:25:50.317499  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:25:50.317511  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:25:50.317521  332015 node_conditions.go:105] duration metric: took 20.520835ms to run NodePressure ...
	I1123 09:25:50.317534  332015 start.go:242] waiting for startup goroutines ...
	I1123 09:25:50.317564  332015 start.go:256] writing updated cluster config ...
	I1123 09:25:50.323143  332015 out.go:203] 
	I1123 09:25:50.326401  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:50.326524  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:25:50.330156  332015 out.go:179] * Starting "ha-857095-m03" control-plane node in "ha-857095" cluster
	I1123 09:25:50.334438  332015 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:25:50.338097  332015 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:25:50.340607  332015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:25:50.340654  332015 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:25:50.340838  332015 cache.go:65] Caching tarball of preloaded images
	I1123 09:25:50.340926  332015 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 09:25:50.340940  332015 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:25:50.341072  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:25:50.370726  332015 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:25:50.370752  332015 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:25:50.370766  332015 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:25:50.370789  332015 start.go:360] acquireMachinesLock for ha-857095-m03: {Name:mk6acf38570d035eb912e1d2f030641425a2af59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:25:50.370845  332015 start.go:364] duration metric: took 36.226µs to acquireMachinesLock for "ha-857095-m03"
	I1123 09:25:50.370869  332015 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:25:50.370875  332015 fix.go:54] fixHost starting: m03
	I1123 09:25:50.371144  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m03 --format={{.State.Status}}
	I1123 09:25:50.400510  332015 fix.go:112] recreateIfNeeded on ha-857095-m03: state=Stopped err=<nil>
	W1123 09:25:50.400540  332015 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:25:50.404410  332015 out.go:252] * Restarting existing docker container for "ha-857095-m03" ...
	I1123 09:25:50.404500  332015 cli_runner.go:164] Run: docker start ha-857095-m03
	I1123 09:25:50.796227  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m03 --format={{.State.Status}}
	I1123 09:25:50.840417  332015 kic.go:430] container "ha-857095-m03" state is running.
	I1123 09:25:50.840758  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m03
	I1123 09:25:50.894166  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:25:50.894416  332015 machine.go:94] provisionDockerMachine start ...
	I1123 09:25:50.894479  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:50.924984  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:25:50.925293  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1123 09:25:50.925301  332015 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:25:50.926098  332015 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 09:25:54.161789  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m03
	
	I1123 09:25:54.161877  332015 ubuntu.go:182] provisioning hostname "ha-857095-m03"
	I1123 09:25:54.161974  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:54.189870  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:25:54.190176  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1123 09:25:54.190186  332015 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-857095-m03 && echo "ha-857095-m03" | sudo tee /etc/hostname
	I1123 09:25:54.416867  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m03
	
	I1123 09:25:54.416961  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:54.451607  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:25:54.451922  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1123 09:25:54.451938  332015 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857095-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857095-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857095-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:25:54.684288  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:25:54.684344  332015 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 09:25:54.684362  332015 ubuntu.go:190] setting up certificates
	I1123 09:25:54.684372  332015 provision.go:84] configureAuth start
	I1123 09:25:54.684450  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m03
	I1123 09:25:54.708108  332015 provision.go:143] copyHostCerts
	I1123 09:25:54.708151  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:25:54.708186  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 09:25:54.708192  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:25:54.708273  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 09:25:54.708351  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:25:54.708368  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 09:25:54.708373  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:25:54.708399  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 09:25:54.708439  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:25:54.708455  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 09:25:54.708459  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:25:54.708484  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 09:25:54.708532  332015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.ha-857095-m03 san=[127.0.0.1 192.168.49.4 ha-857095-m03 localhost minikube]
	I1123 09:25:54.877285  332015 provision.go:177] copyRemoteCerts
	I1123 09:25:54.877362  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:25:54.877428  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:54.897354  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m03/id_rsa Username:docker}
	I1123 09:25:55.052011  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1123 09:25:55.052077  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 09:25:55.110347  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1123 09:25:55.110418  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:25:55.160630  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1123 09:25:55.160706  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 09:25:55.206791  332015 provision.go:87] duration metric: took 522.405111ms to configureAuth
	I1123 09:25:55.206859  332015 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:25:55.207143  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:55.207288  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:55.231475  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:25:55.231787  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1123 09:25:55.231807  332015 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:25:55.818269  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:25:55.818294  332015 machine.go:97] duration metric: took 4.923860996s to provisionDockerMachine
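The provisioning step above writes CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' to /etc/sysconfig/crio.minikube and restarts CRI-O, so pulls from in-cluster registries reachable on service ClusterIPs are allowed without TLS. A quick sanity check on the node (assuming, as in minikube's base image, that the crio unit sources this file via an EnvironmentFile= line):

    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio
    systemctl cat crio | grep EnvironmentFile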
	I1123 09:25:55.818307  332015 start.go:293] postStartSetup for "ha-857095-m03" (driver="docker")
	I1123 09:25:55.818318  332015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:25:55.818419  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:25:55.818465  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:55.838899  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m03/id_rsa Username:docker}
	I1123 09:25:55.945315  332015 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:25:55.948680  332015 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:25:55.948711  332015 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:25:55.948723  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 09:25:55.948779  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 09:25:55.948855  332015 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 09:25:55.948865  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /etc/ssl/certs/2849042.pem
	I1123 09:25:55.948961  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:25:55.956253  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:25:55.975283  332015 start.go:296] duration metric: took 156.955332ms for postStartSetup
	I1123 09:25:55.975413  332015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:25:55.975489  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:55.995364  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m03/id_rsa Username:docker}
	I1123 09:25:56.102831  332015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:25:56.108114  332015 fix.go:56] duration metric: took 5.737232288s for fixHost
	I1123 09:25:56.108138  332015 start.go:83] releasing machines lock for "ha-857095-m03", held for 5.737279936s
	I1123 09:25:56.108206  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m03
	I1123 09:25:56.129684  332015 out.go:179] * Found network options:
	I1123 09:25:56.132653  332015 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1123 09:25:56.138460  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:25:56.138495  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:25:56.138520  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:25:56.138534  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	I1123 09:25:56.138602  332015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:25:56.138645  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:56.138894  332015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:25:56.138945  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:56.160178  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m03/id_rsa Username:docker}
	I1123 09:25:56.178028  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m03/id_rsa Username:docker}
	I1123 09:25:56.510498  332015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:25:56.519235  332015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:25:56.519358  332015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:25:56.532899  332015 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:25:56.532974  332015 start.go:496] detecting cgroup driver to use...
	I1123 09:25:56.533021  332015 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:25:56.533095  332015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:25:56.563353  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:25:56.582194  332015 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:25:56.582307  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:25:56.604304  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:25:56.624857  332015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:25:56.880123  332015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:25:57.130771  332015 docker.go:234] disabling docker service ...
	I1123 09:25:57.130892  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:25:57.155366  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:25:57.181953  332015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:25:57.470517  332015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:25:57.703602  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:25:57.722751  332015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:25:57.754960  332015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:25:57.755080  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.788981  332015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:25:57.789102  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.805042  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.815549  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.830253  332015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:25:57.840395  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.853329  332015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.867204  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.882910  332015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:25:57.895568  332015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:25:57.910955  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:25:58.202730  332015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:25:59.499296  332015 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.296482203s)
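The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place before the restart: the pause image is pinned, the cgroup manager is switched to cgroupfs, conmon is moved into the pod cgroup, and unprivileged binds to low ports are allowed via default_sysctls. A quick way to confirm the result on the node (a sketch; the values in the comments follow from the sed expressions in this log, while the exact layout of the stock drop-in is assumed):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",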
	I1123 09:25:59.499324  332015 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:25:59.499400  332015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:25:59.503386  332015 start.go:564] Will wait 60s for crictl version
	I1123 09:25:59.503504  332015 ssh_runner.go:195] Run: which crictl
	I1123 09:25:59.507281  332015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:25:59.537756  332015 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:25:59.537841  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:25:59.571202  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:25:59.604176  332015 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:25:59.607193  332015 out.go:179]   - env NO_PROXY=192.168.49.2
	I1123 09:25:59.610134  332015 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1123 09:25:59.613170  332015 cli_runner.go:164] Run: docker network inspect ha-857095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:25:59.630136  332015 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 09:25:59.634914  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:25:59.644730  332015 mustload.go:66] Loading cluster: ha-857095
	I1123 09:25:59.644972  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:59.645239  332015 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:25:59.662875  332015 host.go:66] Checking if "ha-857095" exists ...
	I1123 09:25:59.663179  332015 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095 for IP: 192.168.49.4
	I1123 09:25:59.663188  332015 certs.go:195] generating shared ca certs ...
	I1123 09:25:59.663201  332015 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:25:59.663327  332015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 09:25:59.663365  332015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 09:25:59.663372  332015 certs.go:257] generating profile certs ...
	I1123 09:25:59.663446  332015 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key
	I1123 09:25:59.663522  332015 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.283ff493
	I1123 09:25:59.663567  332015 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key
	I1123 09:25:59.663575  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1123 09:25:59.663589  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1123 09:25:59.663601  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1123 09:25:59.663612  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1123 09:25:59.663621  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1123 09:25:59.663633  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1123 09:25:59.663644  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1123 09:25:59.663654  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1123 09:25:59.663702  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 09:25:59.663734  332015 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 09:25:59.663742  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:25:59.663771  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:25:59.663797  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:25:59.663820  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 09:25:59.663870  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:25:59.663898  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem -> /usr/share/ca-certificates/284904.pem
	I1123 09:25:59.663912  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /usr/share/ca-certificates/2849042.pem
	I1123 09:25:59.663923  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:25:59.663978  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:25:59.689941  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:25:59.793738  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1123 09:25:59.797235  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1123 09:25:59.805196  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1123 09:25:59.808653  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1123 09:25:59.816623  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1123 09:25:59.819984  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1123 09:25:59.828037  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1123 09:25:59.831812  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1123 09:25:59.839915  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1123 09:25:59.843477  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1123 09:25:59.851542  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1123 09:25:59.855295  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1123 09:25:59.863949  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:25:59.885646  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:25:59.904286  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:25:59.924769  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:25:59.944702  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 09:25:59.963610  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:25:59.984488  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:26:00.117342  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:26:00.182322  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 09:26:00.220393  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 09:26:00.303614  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:26:00.335892  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1123 09:26:00.355160  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1123 09:26:00.374206  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1123 09:26:00.392709  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1123 09:26:00.409109  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1123 09:26:00.425117  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1123 09:26:00.439914  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1123 09:26:00.464465  332015 ssh_runner.go:195] Run: openssl version
	I1123 09:26:00.472524  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 09:26:00.483656  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 09:26:00.487711  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 09:26:00.487827  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 09:26:00.532783  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 09:26:00.543887  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 09:26:00.551979  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 09:26:00.555635  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 09:26:00.555720  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 09:26:00.597611  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:26:00.605512  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:26:00.613913  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:00.617669  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:00.617766  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:00.660921  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
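The 51391683.0, 3ec20f2e.0 and b5213941.0 link names above are OpenSSL subject-hash names: `openssl x509 -hash -noout` prints the hash that TLS libraries use to look certificates up in /etc/ssl/certs, and each `ln -fs` publishes the copied PEM under that name. Reproduced by hand for the cluster CA (illustrative only; the hash value is derived from the certificate's subject):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0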
	I1123 09:26:00.669960  332015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:26:00.674647  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:26:00.723335  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:26:00.764258  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:26:00.804912  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:26:00.845808  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:26:00.888833  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
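Each `openssl x509 ... -checkend 86400` probe above exits 0 only while the certificate remains valid for at least another 86400 seconds (24 hours); a non-zero exit presumably lets the restart path regenerate a cert that is about to expire instead of reusing it. Standalone form (path shown only for illustration):

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  || echo "certificate expires within 24h"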
	I1123 09:26:00.931554  332015 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1123 09:26:00.931679  332015 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-857095-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:26:00.931715  332015 kube-vip.go:115] generating kube-vip config ...
	I1123 09:26:00.931766  332015 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1123 09:26:00.944231  332015 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:26:00.944300  332015 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
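Because `lsmod | grep ip_vs` exits non-zero inside the node, IPVS-backed control-plane load-balancing is skipped and the generated kube-vip static pod instead relies on ARP advertisement of the 192.168.49.254 VIP with leader election (vip_arp / vip_leaderelection above), with cp_enable still on. On a host kernel that does ship the module (docker-driver nodes share the host kernel), the probe could be satisfied before starting the cluster, e.g. (a sketch, assuming modprobe and the ip_vs module are available on the host):

	sudo modprobe ip_vs              # load the IPVS core module on the host
	lsmod | grep ip_vs               # the same check minikube runs before writing kube-vip.yaml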
	I1123 09:26:00.944366  332015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:26:00.952127  332015 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:26:00.952218  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1123 09:26:00.959898  332015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 09:26:00.974683  332015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:26:00.988424  332015 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1123 09:26:01.007528  332015 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1123 09:26:01.011388  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:26:01.021832  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:26:01.167574  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:26:01.186465  332015 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:26:01.187024  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:01.191851  332015 out.go:179] * Verifying Kubernetes components...
	I1123 09:26:01.194848  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:26:01.336348  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:26:01.352032  332015 kapi.go:59] client config for ha-857095: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1123 09:26:01.352169  332015 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1123 09:26:01.352449  332015 node_ready.go:35] waiting up to 6m0s for node "ha-857095-m03" to be "Ready" ...
	I1123 09:26:01.355787  332015 node_ready.go:49] node "ha-857095-m03" is "Ready"
	I1123 09:26:01.355816  332015 node_ready.go:38] duration metric: took 3.32939ms for node "ha-857095-m03" to be "Ready" ...
	I1123 09:26:01.355830  332015 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:26:01.355885  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:01.856392  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:02.356689  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:02.856504  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:03.356575  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:03.856101  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:04.356803  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:04.856202  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:05.356951  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:05.856542  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:06.356037  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:06.856518  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:07.356012  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:07.856915  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:08.356635  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:08.856266  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:09.356016  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:09.375020  332015 api_server.go:72] duration metric: took 8.188500317s to wait for apiserver process to appear ...
	I1123 09:26:09.375044  332015 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:26:09.375064  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:26:09.384535  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 09:26:09.386418  332015 api_server.go:141] control plane version: v1.34.1
	I1123 09:26:09.386440  332015 api_server.go:131] duration metric: took 11.388651ms to wait for apiserver health ...
	I1123 09:26:09.386448  332015 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:26:09.406325  332015 system_pods.go:59] 26 kube-system pods found
	I1123 09:26:09.407759  332015 system_pods.go:61] "coredns-66bc5c9577-gqskt" [9ec3e73a-4033-41ae-927a-50584a3e9653] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:26:09.407805  332015 system_pods.go:61] "coredns-66bc5c9577-kqvhl" [bcbbf58b-9d2d-4a51-b4c1-bfec16447df5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:26:09.407831  332015 system_pods.go:61] "etcd-ha-857095" [3eaffe71-9ce6-4a9b-8530-1de6a4ec8773] Running
	I1123 09:26:09.407852  332015 system_pods.go:61] "etcd-ha-857095-m02" [5f8628c9-5725-4ca9-9622-b42a9b63c833] Running
	I1123 09:26:09.407873  332015 system_pods.go:61] "etcd-ha-857095-m03" [2ec71863-ebd8-45ca-9f19-707503671154] Running
	I1123 09:26:09.407906  332015 system_pods.go:61] "kindnet-8bs9t" [d9dee210-2075-4095-8540-c13c401e5a68] Running
	I1123 09:26:09.407931  332015 system_pods.go:61] "kindnet-ls8hm" [b7c7ef9d-ebdd-4bd4-97e6-595b84787117] Running
	I1123 09:26:09.407950  332015 system_pods.go:61] "kindnet-r7p2c" [a4f419f5-ecbc-48e6-8f98-732c4ac5a977] Running
	I1123 09:26:09.407971  332015 system_pods.go:61] "kindnet-v5cch" [4bfed9c2-b321-43a0-a18b-c867696cf4cb] Running
	I1123 09:26:09.407992  332015 system_pods.go:61] "kube-apiserver-ha-857095" [697606bd-c111-4922-adda-6902a7f40915] Running
	I1123 09:26:09.408020  332015 system_pods.go:61] "kube-apiserver-ha-857095-m02" [8516bae2-f830-4a82-aa30-dbd7bf657b52] Running
	I1123 09:26:09.408046  332015 system_pods.go:61] "kube-apiserver-ha-857095-m03" [9f6f5d7d-9bba-4b26-b928-05119bbc98af] Running
	I1123 09:26:09.408073  332015 system_pods.go:61] "kube-controller-manager-ha-857095" [026d1873-0078-4c87-a9c1-b5a615844bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:26:09.408095  332015 system_pods.go:61] "kube-controller-manager-ha-857095-m02" [51f4d1ee-3b47-49f2-907e-68598e7d88e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:26:09.408128  332015 system_pods.go:61] "kube-controller-manager-ha-857095-m03" [234e7d83-1430-4ee4-91e4-73bf5e7221dc] Running
	I1123 09:26:09.408158  332015 system_pods.go:61] "kube-proxy-275zc" [b46e4648-46c6-4f04-85bc-bbfd4aedc821] Running
	I1123 09:26:09.408180  332015 system_pods.go:61] "kube-proxy-6k46z" [f2387038-f806-4417-961a-cf4390f4b4a5] Running
	I1123 09:26:09.408201  332015 system_pods.go:61] "kube-proxy-9qgbr" [a03beba1-4074-45e0-a3a0-a4cf0917b9a8] Running
	I1123 09:26:09.408237  332015 system_pods.go:61] "kube-proxy-lqqmc" [81a61d2b-bb1b-46d7-9acc-035150e8061b] Running
	I1123 09:26:09.408274  332015 system_pods.go:61] "kube-scheduler-ha-857095" [0598722f-31ac-4529-8b00-94c9bccf8255] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:26:09.408299  332015 system_pods.go:61] "kube-scheduler-ha-857095-m02" [0d16a804-69c1-47f1-b32c-3b35f950765f] Running
	I1123 09:26:09.408319  332015 system_pods.go:61] "kube-scheduler-ha-857095-m03" [aaf4d61f-0ec3-4e06-912a-a87fc3ab3cdb] Running
	I1123 09:26:09.408352  332015 system_pods.go:61] "kube-vip-ha-857095" [41b5690c-90a6-4557-9e9c-fcb76fe0c548] Running
	I1123 09:26:09.408380  332015 system_pods.go:61] "kube-vip-ha-857095-m02" [9c7a58ce-d823-401a-9695-36a0b87ab3ca] Running
	I1123 09:26:09.408432  332015 system_pods.go:61] "kube-vip-ha-857095-m03" [3830c657-5386-4214-a319-d42e19a40c12] Running
	I1123 09:26:09.408457  332015 system_pods.go:61] "storage-provisioner" [fd6347d8-5602-4a34-875b-811bc8ea2bc2] Running
	I1123 09:26:09.408480  332015 system_pods.go:74] duration metric: took 22.024671ms to wait for pod list to return data ...
	I1123 09:26:09.408503  332015 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:26:09.412561  332015 default_sa.go:45] found service account: "default"
	I1123 09:26:09.412632  332015 default_sa.go:55] duration metric: took 4.107335ms for default service account to be created ...
	I1123 09:26:09.412660  332015 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:26:09.420811  332015 system_pods.go:86] 26 kube-system pods found
	I1123 09:26:09.420908  332015 system_pods.go:89] "coredns-66bc5c9577-gqskt" [9ec3e73a-4033-41ae-927a-50584a3e9653] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:26:09.420935  332015 system_pods.go:89] "coredns-66bc5c9577-kqvhl" [bcbbf58b-9d2d-4a51-b4c1-bfec16447df5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:26:09.420976  332015 system_pods.go:89] "etcd-ha-857095" [3eaffe71-9ce6-4a9b-8530-1de6a4ec8773] Running
	I1123 09:26:09.421010  332015 system_pods.go:89] "etcd-ha-857095-m02" [5f8628c9-5725-4ca9-9622-b42a9b63c833] Running
	I1123 09:26:09.421033  332015 system_pods.go:89] "etcd-ha-857095-m03" [2ec71863-ebd8-45ca-9f19-707503671154] Running
	I1123 09:26:09.421055  332015 system_pods.go:89] "kindnet-8bs9t" [d9dee210-2075-4095-8540-c13c401e5a68] Running
	I1123 09:26:09.421089  332015 system_pods.go:89] "kindnet-ls8hm" [b7c7ef9d-ebdd-4bd4-97e6-595b84787117] Running
	I1123 09:26:09.421118  332015 system_pods.go:89] "kindnet-r7p2c" [a4f419f5-ecbc-48e6-8f98-732c4ac5a977] Running
	I1123 09:26:09.421158  332015 system_pods.go:89] "kindnet-v5cch" [4bfed9c2-b321-43a0-a18b-c867696cf4cb] Running
	I1123 09:26:09.421187  332015 system_pods.go:89] "kube-apiserver-ha-857095" [697606bd-c111-4922-adda-6902a7f40915] Running
	I1123 09:26:09.421211  332015 system_pods.go:89] "kube-apiserver-ha-857095-m02" [8516bae2-f830-4a82-aa30-dbd7bf657b52] Running
	I1123 09:26:09.421233  332015 system_pods.go:89] "kube-apiserver-ha-857095-m03" [9f6f5d7d-9bba-4b26-b928-05119bbc98af] Running
	I1123 09:26:09.421274  332015 system_pods.go:89] "kube-controller-manager-ha-857095" [026d1873-0078-4c87-a9c1-b5a615844bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:26:09.421303  332015 system_pods.go:89] "kube-controller-manager-ha-857095-m02" [51f4d1ee-3b47-49f2-907e-68598e7d88e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:26:09.421325  332015 system_pods.go:89] "kube-controller-manager-ha-857095-m03" [234e7d83-1430-4ee4-91e4-73bf5e7221dc] Running
	I1123 09:26:09.421348  332015 system_pods.go:89] "kube-proxy-275zc" [b46e4648-46c6-4f04-85bc-bbfd4aedc821] Running
	I1123 09:26:09.421385  332015 system_pods.go:89] "kube-proxy-6k46z" [f2387038-f806-4417-961a-cf4390f4b4a5] Running
	I1123 09:26:09.421421  332015 system_pods.go:89] "kube-proxy-9qgbr" [a03beba1-4074-45e0-a3a0-a4cf0917b9a8] Running
	I1123 09:26:09.421441  332015 system_pods.go:89] "kube-proxy-lqqmc" [81a61d2b-bb1b-46d7-9acc-035150e8061b] Running
	I1123 09:26:09.421463  332015 system_pods.go:89] "kube-scheduler-ha-857095" [0598722f-31ac-4529-8b00-94c9bccf8255] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:26:09.421494  332015 system_pods.go:89] "kube-scheduler-ha-857095-m02" [0d16a804-69c1-47f1-b32c-3b35f950765f] Running
	I1123 09:26:09.421521  332015 system_pods.go:89] "kube-scheduler-ha-857095-m03" [aaf4d61f-0ec3-4e06-912a-a87fc3ab3cdb] Running
	I1123 09:26:09.421541  332015 system_pods.go:89] "kube-vip-ha-857095" [41b5690c-90a6-4557-9e9c-fcb76fe0c548] Running
	I1123 09:26:09.421562  332015 system_pods.go:89] "kube-vip-ha-857095-m02" [9c7a58ce-d823-401a-9695-36a0b87ab3ca] Running
	I1123 09:26:09.421595  332015 system_pods.go:89] "kube-vip-ha-857095-m03" [3830c657-5386-4214-a319-d42e19a40c12] Running
	I1123 09:26:09.421621  332015 system_pods.go:89] "storage-provisioner" [fd6347d8-5602-4a34-875b-811bc8ea2bc2] Running
	I1123 09:26:09.421644  332015 system_pods.go:126] duration metric: took 8.958012ms to wait for k8s-apps to be running ...
	I1123 09:26:09.421666  332015 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:26:09.421753  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:26:09.436477  332015 system_svc.go:56] duration metric: took 14.802398ms WaitForService to wait for kubelet
	I1123 09:26:09.436515  332015 kubeadm.go:587] duration metric: took 8.250000324s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:26:09.436534  332015 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:26:09.440490  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:09.440519  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:09.440532  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:09.440537  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:09.440549  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:09.440555  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:09.440563  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:09.440568  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:09.440578  332015 node_conditions.go:105] duration metric: took 4.039042ms to run NodePressure ...
	I1123 09:26:09.440592  332015 start.go:242] waiting for startup goroutines ...
	I1123 09:26:09.440627  332015 start.go:256] writing updated cluster config ...
	I1123 09:26:09.444444  332015 out.go:203] 
	I1123 09:26:09.447845  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:09.447976  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:26:09.451331  332015 out.go:179] * Starting "ha-857095-m04" worker node in "ha-857095" cluster
	I1123 09:26:09.454181  332015 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:26:09.457128  332015 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:26:09.459981  332015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:26:09.460044  332015 cache.go:65] Caching tarball of preloaded images
	I1123 09:26:09.460053  332015 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:26:09.460162  332015 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 09:26:09.460183  332015 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:26:09.460319  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:26:09.487056  332015 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:26:09.487075  332015 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:26:09.487099  332015 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:26:09.487126  332015 start.go:360] acquireMachinesLock for ha-857095-m04: {Name:mkc778064e426bc743bab6e8fad34bbaae40e782 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:26:09.487176  332015 start.go:364] duration metric: took 35.471µs to acquireMachinesLock for "ha-857095-m04"
	I1123 09:26:09.487195  332015 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:26:09.487200  332015 fix.go:54] fixHost starting: m04
	I1123 09:26:09.487451  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m04 --format={{.State.Status}}
	I1123 09:26:09.507899  332015 fix.go:112] recreateIfNeeded on ha-857095-m04: state=Stopped err=<nil>
	W1123 09:26:09.507924  332015 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:26:09.511107  332015 out.go:252] * Restarting existing docker container for "ha-857095-m04" ...
	I1123 09:26:09.511253  332015 cli_runner.go:164] Run: docker start ha-857095-m04
	I1123 09:26:09.866032  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m04 --format={{.State.Status}}
	I1123 09:26:09.896315  332015 kic.go:430] container "ha-857095-m04" state is running.
	I1123 09:26:09.896669  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m04
	I1123 09:26:09.920856  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:26:09.921084  332015 machine.go:94] provisionDockerMachine start ...
	I1123 09:26:09.921148  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:09.953275  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:26:09.953746  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1123 09:26:09.953766  332015 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:26:09.954414  332015 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 09:26:13.177535  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m04
	
	I1123 09:26:13.177568  332015 ubuntu.go:182] provisioning hostname "ha-857095-m04"
	I1123 09:26:13.177640  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:13.208850  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:26:13.209159  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1123 09:26:13.209176  332015 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-857095-m04 && echo "ha-857095-m04" | sudo tee /etc/hostname
	I1123 09:26:13.425765  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m04
	
	I1123 09:26:13.425859  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:13.460720  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:26:13.461034  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1123 09:26:13.461061  332015 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857095-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857095-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857095-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:26:13.666205  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:26:13.666234  332015 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 09:26:13.666252  332015 ubuntu.go:190] setting up certificates
	I1123 09:26:13.666263  332015 provision.go:84] configureAuth start
	I1123 09:26:13.666323  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m04
	I1123 09:26:13.699046  332015 provision.go:143] copyHostCerts
	I1123 09:26:13.699100  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:26:13.699136  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 09:26:13.699149  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:26:13.699242  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 09:26:13.699332  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:26:13.699356  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 09:26:13.699365  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:26:13.699394  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 09:26:13.699443  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:26:13.699466  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 09:26:13.699475  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:26:13.699504  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 09:26:13.699558  332015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.ha-857095-m04 san=[127.0.0.1 192.168.49.5 ha-857095-m04 localhost minikube]
	I1123 09:26:13.947128  332015 provision.go:177] copyRemoteCerts
	I1123 09:26:13.947199  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:26:13.947245  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:13.964666  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m04/id_rsa Username:docker}
	I1123 09:26:14.108546  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1123 09:26:14.108614  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:26:14.147222  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1123 09:26:14.147298  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 09:26:14.174245  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1123 09:26:14.174323  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:26:14.202367  332015 provision.go:87] duration metric: took 536.081268ms to configureAuth
	I1123 09:26:14.202398  332015 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:26:14.202692  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:14.202823  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:14.228826  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:26:14.229151  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1123 09:26:14.229165  332015 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:26:14.698077  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:26:14.698147  332015 machine.go:97] duration metric: took 4.777046451s to provisionDockerMachine
	I1123 09:26:14.698176  332015 start.go:293] postStartSetup for "ha-857095-m04" (driver="docker")
	I1123 09:26:14.698221  332015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:26:14.698305  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:26:14.698371  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:14.723686  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m04/id_rsa Username:docker}
	I1123 09:26:14.851030  332015 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:26:14.858337  332015 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:26:14.858362  332015 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:26:14.858374  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 09:26:14.858433  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 09:26:14.858508  332015 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 09:26:14.858515  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /etc/ssl/certs/2849042.pem
	I1123 09:26:14.858611  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:26:14.870806  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:26:14.904225  332015 start.go:296] duration metric: took 206.013245ms for postStartSetup
	I1123 09:26:14.904312  332015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:26:14.904357  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:14.925549  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m04/id_rsa Username:docker}
	I1123 09:26:15.048457  332015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:26:15.064072  332015 fix.go:56] duration metric: took 5.57686319s for fixHost
	I1123 09:26:15.064101  332015 start.go:83] releasing machines lock for "ha-857095-m04", held for 5.576912749s
	I1123 09:26:15.064189  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m04
	I1123 09:26:15.099935  332015 out.go:179] * Found network options:
	I1123 09:26:15.102733  332015 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1123 09:26:15.105537  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:26:15.105581  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:26:15.105592  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:26:15.105615  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:26:15.105625  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:26:15.105635  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	I1123 09:26:15.105709  332015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:26:15.105751  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:15.106052  332015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:26:15.106106  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:15.139318  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m04/id_rsa Username:docker}
	I1123 09:26:15.143260  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m04/id_rsa Username:docker}
	I1123 09:26:15.438462  332015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:26:15.444861  332015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:26:15.444936  332015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:26:15.465823  332015 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:26:15.465847  332015 start.go:496] detecting cgroup driver to use...
	I1123 09:26:15.465876  332015 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:26:15.465925  332015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:26:15.496588  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:26:15.514577  332015 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:26:15.514673  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:26:15.534950  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:26:15.548709  332015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:26:15.754867  332015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:26:15.954809  332015 docker.go:234] disabling docker service ...
	I1123 09:26:15.954903  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:26:15.979986  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:26:15.995201  332015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:26:16.195305  332015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:26:16.373235  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:26:16.389735  332015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:26:16.410006  332015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:26:16.410174  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.419483  332015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:26:16.419592  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.428394  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.444114  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.463213  332015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:26:16.471981  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.480994  332015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.489302  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.498210  332015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:26:16.508001  332015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:26:16.516953  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:26:16.726052  332015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:26:16.986187  332015 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:26:16.986301  332015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:26:16.994949  332015 start.go:564] Will wait 60s for crictl version
	I1123 09:26:16.995057  332015 ssh_runner.go:195] Run: which crictl
	I1123 09:26:17.005848  332015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:26:17.068139  332015 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:26:17.068261  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:26:17.123372  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:26:17.173210  332015 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:26:17.176207  332015 out.go:179]   - env NO_PROXY=192.168.49.2
	I1123 09:26:17.179404  332015 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1123 09:26:17.182767  332015 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1123 09:26:17.185787  332015 cli_runner.go:164] Run: docker network inspect ha-857095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:26:17.204073  332015 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 09:26:17.207997  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:26:17.218002  332015 mustload.go:66] Loading cluster: ha-857095
	I1123 09:26:17.218249  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:17.218496  332015 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:26:17.246745  332015 host.go:66] Checking if "ha-857095" exists ...
	I1123 09:26:17.247017  332015 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095 for IP: 192.168.49.5
	I1123 09:26:17.247024  332015 certs.go:195] generating shared ca certs ...
	I1123 09:26:17.247040  332015 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:26:17.247177  332015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 09:26:17.247217  332015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 09:26:17.247228  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1123 09:26:17.247241  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1123 09:26:17.247254  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1123 09:26:17.247265  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1123 09:26:17.247315  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 09:26:17.247346  332015 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 09:26:17.247354  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:26:17.247382  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:26:17.247406  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:26:17.247429  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 09:26:17.247473  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:26:17.247504  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /usr/share/ca-certificates/2849042.pem
	I1123 09:26:17.247517  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:17.247527  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem -> /usr/share/ca-certificates/284904.pem
	I1123 09:26:17.247544  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:26:17.302193  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:26:17.327160  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:26:17.353974  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:26:17.377204  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 09:26:17.403460  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:26:17.423323  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 09:26:17.448832  332015 ssh_runner.go:195] Run: openssl version
	I1123 09:26:17.456781  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 09:26:17.467249  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 09:26:17.472303  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 09:26:17.472418  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 09:26:17.523101  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:26:17.535534  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:26:17.546862  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:17.552603  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:17.552699  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:17.599146  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:26:17.610235  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 09:26:17.618699  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 09:26:17.623313  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 09:26:17.623432  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 09:26:17.676492  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 09:26:17.685680  332015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:26:17.690257  332015 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:26:17.690334  332015 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1123 09:26:17.690451  332015 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-857095-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:26:17.690571  332015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:26:17.699579  332015 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:26:17.699678  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1123 09:26:17.711806  332015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 09:26:17.726908  332015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:26:17.741366  332015 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1123 09:26:17.745929  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:26:17.756453  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:26:17.960408  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:26:17.989357  332015 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1123 09:26:17.989946  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:17.994531  332015 out.go:179] * Verifying Kubernetes components...
	I1123 09:26:17.998123  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:26:18.239793  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:26:18.262774  332015 kapi.go:59] client config for ha-857095: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1123 09:26:18.262843  332015 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1123 09:26:18.263099  332015 node_ready.go:35] waiting up to 6m0s for node "ha-857095-m04" to be "Ready" ...
	I1123 09:26:18.269812  332015 node_ready.go:49] node "ha-857095-m04" is "Ready"
	I1123 09:26:18.269839  332015 node_ready.go:38] duration metric: took 6.727383ms for node "ha-857095-m04" to be "Ready" ...
	I1123 09:26:18.269854  332015 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:26:18.269907  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:26:18.288660  332015 system_svc.go:56] duration metric: took 18.797608ms WaitForService to wait for kubelet
	I1123 09:26:18.288686  332015 kubeadm.go:587] duration metric: took 299.282478ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:26:18.288702  332015 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:26:18.292995  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:18.293021  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:18.293032  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:18.293037  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:18.293042  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:18.293046  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:18.293051  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:18.293055  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:18.293059  332015 node_conditions.go:105] duration metric: took 4.352482ms to run NodePressure ...
	I1123 09:26:18.293072  332015 start.go:242] waiting for startup goroutines ...
	I1123 09:26:18.293094  332015 start.go:256] writing updated cluster config ...
	I1123 09:26:18.293459  332015 ssh_runner.go:195] Run: rm -f paused
	I1123 09:26:18.297614  332015 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:26:18.298096  332015 kapi.go:59] client config for ha-857095: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:26:18.325623  332015 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gqskt" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:26:20.334064  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	W1123 09:26:22.832313  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	W1123 09:26:24.834199  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	W1123 09:26:27.335305  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	W1123 09:26:29.831965  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	W1123 09:26:31.861015  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	I1123 09:26:32.333037  332015 pod_ready.go:94] pod "coredns-66bc5c9577-gqskt" is "Ready"
	I1123 09:26:32.333066  332015 pod_ready.go:86] duration metric: took 14.007410196s for pod "coredns-66bc5c9577-gqskt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.333077  332015 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kqvhl" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.338930  332015 pod_ready.go:94] pod "coredns-66bc5c9577-kqvhl" is "Ready"
	I1123 09:26:32.338959  332015 pod_ready.go:86] duration metric: took 5.876773ms for pod "coredns-66bc5c9577-kqvhl" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.342889  332015 pod_ready.go:83] waiting for pod "etcd-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.354954  332015 pod_ready.go:94] pod "etcd-ha-857095" is "Ready"
	I1123 09:26:32.354982  332015 pod_ready.go:86] duration metric: took 12.06568ms for pod "etcd-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.354992  332015 pod_ready.go:83] waiting for pod "etcd-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.360908  332015 pod_ready.go:94] pod "etcd-ha-857095-m02" is "Ready"
	I1123 09:26:32.360988  332015 pod_ready.go:86] duration metric: took 5.989209ms for pod "etcd-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.361006  332015 pod_ready.go:83] waiting for pod "etcd-ha-857095-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.527237  332015 request.go:683] "Waited before sending request" delay="163.188719ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095-m03"
	I1123 09:26:32.531141  332015 pod_ready.go:94] pod "etcd-ha-857095-m03" is "Ready"
	I1123 09:26:32.531176  332015 pod_ready.go:86] duration metric: took 170.163678ms for pod "etcd-ha-857095-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.727633  332015 request.go:683] "Waited before sending request" delay="196.333255ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1123 09:26:32.731295  332015 pod_ready.go:83] waiting for pod "kube-apiserver-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.927721  332015 request.go:683] "Waited before sending request" delay="196.318551ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857095"
	I1123 09:26:33.127610  332015 request.go:683] "Waited before sending request" delay="196.351881ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095"
	I1123 09:26:33.131377  332015 pod_ready.go:94] pod "kube-apiserver-ha-857095" is "Ready"
	I1123 09:26:33.131404  332015 pod_ready.go:86] duration metric: took 400.08428ms for pod "kube-apiserver-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:33.131415  332015 pod_ready.go:83] waiting for pod "kube-apiserver-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:33.326734  332015 request.go:683] "Waited before sending request" delay="195.246384ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857095-m02"
	I1123 09:26:33.527259  332015 request.go:683] "Waited before sending request" delay="197.325627ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095-m02"
	I1123 09:26:33.531408  332015 pod_ready.go:94] pod "kube-apiserver-ha-857095-m02" is "Ready"
	I1123 09:26:33.531476  332015 pod_ready.go:86] duration metric: took 400.053592ms for pod "kube-apiserver-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:33.531510  332015 pod_ready.go:83] waiting for pod "kube-apiserver-ha-857095-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:33.726854  332015 request.go:683] "Waited before sending request" delay="195.24293ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857095-m03"
	I1123 09:26:33.927056  332015 request.go:683] "Waited before sending request" delay="196.304447ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095-m03"
	I1123 09:26:33.930670  332015 pod_ready.go:94] pod "kube-apiserver-ha-857095-m03" is "Ready"
	I1123 09:26:33.930738  332015 pod_ready.go:86] duration metric: took 399.207142ms for pod "kube-apiserver-ha-857095-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:34.127173  332015 request.go:683] "Waited before sending request" delay="196.311848ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1123 09:26:34.131888  332015 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:34.327442  332015 request.go:683] "Waited before sending request" delay="195.421664ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857095"
	I1123 09:26:34.526909  332015 request.go:683] "Waited before sending request" delay="195.121754ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095"
	I1123 09:26:34.727795  332015 request.go:683] "Waited before sending request" delay="95.293534ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857095"
	I1123 09:26:34.926808  332015 request.go:683] "Waited before sending request" delay="192.288691ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095"
	I1123 09:26:35.326671  332015 request.go:683] "Waited before sending request" delay="190.240931ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095"
	I1123 09:26:35.727087  332015 request.go:683] "Waited before sending request" delay="90.213857ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095"
	W1123 09:26:36.147664  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:38.639668  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:41.138106  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:43.639146  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:46.140223  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:48.638331  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:50.639066  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	I1123 09:26:51.639670  332015 pod_ready.go:94] pod "kube-controller-manager-ha-857095" is "Ready"
	I1123 09:26:51.639700  332015 pod_ready.go:86] duration metric: took 17.507743609s for pod "kube-controller-manager-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:51.639710  332015 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:26:53.652573  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:26:56.146503  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:26:58.147735  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:00.225967  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:02.647589  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:04.647752  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:07.153585  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:09.646738  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:12.145665  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:14.146292  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:16.646315  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:18.649017  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:20.649200  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:23.146376  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:25.147713  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:27.646124  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:29.646694  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:32.147157  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:34.647065  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:37.145928  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:39.149680  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:41.646227  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:43.648098  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:46.145963  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:48.146438  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:50.147240  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:52.647369  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:55.146780  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:57.649707  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:00.227209  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:02.646807  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:04.646959  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:07.146296  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:09.646937  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:11.648675  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:14.146286  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:16.646924  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:18.651084  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:21.147312  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:23.646217  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:25.646310  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:27.646958  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:30.146762  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:32.647802  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:35.146446  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:37.147422  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:39.647209  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:42.147709  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:44.646580  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:47.146583  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:49.646857  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:51.647231  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:54.147109  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:56.646513  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:58.646743  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:00.647210  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:03.146363  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:05.146523  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:07.147002  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:09.647653  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:12.146246  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:14.146687  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:16.157442  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:18.649348  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:21.146242  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:23.146404  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:25.646842  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:27.647159  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:29.647890  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:32.147183  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:34.647714  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:37.146420  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:39.146792  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:41.646176  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:43.646530  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:46.147106  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:48.149876  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:50.646833  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:53.145934  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:55.147151  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:57.646423  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:59.646898  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:01.651276  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:04.146294  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:06.150790  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:08.648014  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:11.147652  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:13.646274  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:16.147137  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	I1123 09:30:18.297999  332015 pod_ready.go:86] duration metric: took 3m26.658254957s for pod "kube-controller-manager-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:30:18.298033  332015 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1123 09:30:18.298048  332015 pod_ready.go:40] duration metric: took 4m0.000406947s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:30:18.301156  332015 out.go:203] 
	W1123 09:30:18.304209  332015 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1123 09:30:18.307045  332015 out.go:203] 
	
	
	==> CRI-O <==
	Nov 23 09:26:20 ha-857095 crio[665]: time="2025-11-23T09:26:20.862323332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:26:20 ha-857095 crio[665]: time="2025-11-23T09:26:20.881754859Z" level=info msg="Created container 1e1332977cad9649cc196ae764ff285705d33ea97901ac8989363521003e0c1c: kube-system/storage-provisioner/storage-provisioner" id=d5a8a1d3-4e58-4349-a0ad-0995b7140043 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:26:20 ha-857095 crio[665]: time="2025-11-23T09:26:20.883093786Z" level=info msg="Starting container: 1e1332977cad9649cc196ae764ff285705d33ea97901ac8989363521003e0c1c" id=712d93bb-15d4-4499-9419-0be2273a15bf name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:26:20 ha-857095 crio[665]: time="2025-11-23T09:26:20.886919608Z" level=info msg="Started container" PID=1464 containerID=1e1332977cad9649cc196ae764ff285705d33ea97901ac8989363521003e0c1c description=kube-system/storage-provisioner/storage-provisioner id=712d93bb-15d4-4499-9419-0be2273a15bf name=/runtime.v1.RuntimeService/StartContainer sandboxID=89473e76d12005c3f55b49ecc42454c1ef67be9260b26ec4b676fd34debc0d80
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.460338709Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.479184228Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.479355167Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.479441683Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.492946689Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.492986977Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.493010509Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.520498311Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.520535941Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.520557766Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.531467208Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.531504222Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.461905305Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=4d71707e-340a-471c-a17c-392bda308647 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.463102323Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=cdc30ef2-5502-4f37-bc6a-387205d0372f name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.46432019Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-857095/kube-controller-manager" id=e9e73fad-72e8-426c-b874-4cc1bd49e392 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.464442185Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.472179073Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.475537935Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.496575929Z" level=info msg="Created container 42babfae983262eb923a97314d7a3b093122d61af813a84a2bcf0956e5326956: kube-system/kube-controller-manager-ha-857095/kube-controller-manager" id=e9e73fad-72e8-426c-b874-4cc1bd49e392 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.497241413Z" level=info msg="Starting container: 42babfae983262eb923a97314d7a3b093122d61af813a84a2bcf0956e5326956" id=2a3e48b5-355d-4798-aaf7-7f6b61e0dc6c name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.499360139Z" level=info msg="Started container" PID=1519 containerID=42babfae983262eb923a97314d7a3b093122d61af813a84a2bcf0956e5326956 description=kube-system/kube-controller-manager-ha-857095/kube-controller-manager id=2a3e48b5-355d-4798-aaf7-7f6b61e0dc6c name=/runtime.v1.RuntimeService/StartContainer sandboxID=59c02939558c0de2a773da5e9f43cad9b5fb72908c248e77f86ee19f370077a6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	42babfae98326       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   3 minutes ago       Running             kube-controller-manager   5                   59c02939558c0       kube-controller-manager-ha-857095   kube-system
	1e1332977cad9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   3 minutes ago       Running             storage-provisioner       2                   89473e76d1200       storage-provisioner                 kube-system
	f05afc1b0445e       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   4 minutes ago       Running             busybox                   1                   4a02c866c3881       busybox-7b57f96db7-jr7sx            default
	f9fc6c6a40826       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   1                   37bcc6634aaea       coredns-66bc5c9577-kqvhl            kube-system
	87aec09c596b0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 minutes ago       Running             kube-proxy                1                   0282e268e1c22       kube-proxy-9qgbr                    kube-system
	d01764f14c48f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Exited              storage-provisioner       1                   89473e76d1200       storage-provisioner                 kube-system
	6b76bdb0dc741       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   1                   c56d4acdc2234       coredns-66bc5c9577-gqskt            kube-system
	44a90d22da14b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 minutes ago       Running             kindnet-cni               1                   397455ea01fe1       kindnet-r7p2c                       kube-system
	0a33af9e8b2a4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   4 minutes ago       Exited              kube-controller-manager   4                   59c02939558c0       kube-controller-manager-ha-857095   kube-system
	20bdce066bf2b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   5 minutes ago       Running             kube-apiserver            2                   8ef118042f73c       kube-apiserver-ha-857095            kube-system
	87647aaa5cefc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Exited              kube-apiserver            1                   8ef118042f73c       kube-apiserver-ha-857095            kube-system
	9e42b9253fb8b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   6 minutes ago       Running             etcd                      1                   6fc84b4ecc8df       etcd-ha-857095                      kube-system
	99df51d331941       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   6 minutes ago       Running             kube-vip                  0                   d5e7755420e7c       kube-vip-ha-857095                  kube-system
	ae37103ec6813       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   6 minutes ago       Running             kube-scheduler            1                   8a4c5a79b6a82       kube-scheduler-ha-857095            kube-system
	
	
	==> coredns [6b76bdb0dc741434ecf605ce04cd2bb3aa3ad5985dd29cb11b1af0d9172d8676] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56838 - 9966 "HINFO IN 284337624056944186.8766603808723713126. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.038775907s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f9fc6c6a4082694b13ca579cc6787e448aa81ab706e072c7930725c06097556b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40754 - 52654 "HINFO IN 1059978023450998029.7253782516717518684. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021959119s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
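	
	(A hedged aside, not part of the captured logs: both CoreDNS instances above log "dial tcp 10.96.0.1:443: i/o timeout", meaning they could not reach the in-cluster kubernetes Service while the control plane was restarting. A minimal sketch of how one might confirm the Service and its endpoints are healthy again, assuming kubectl is already pointed at the ha-857095 cluster:)
	
	  # CoreDNS pod status and placement.
	  kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
	  # The ClusterIP the "kubernetes" plugin dials (10.96.0.1) and the apiserver
	  # endpoints backing it.
	  kubectl get svc kubernetes
	  kubectl get endpointslices -l kubernetes.io/service-name=kubernetes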
	
	
	==> describe nodes <==
	Name:               ha-857095
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-857095
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=ha-857095
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_18_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:18:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857095
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:30:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:30:13 +0000   Sun, 23 Nov 2025 09:18:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:30:13 +0000   Sun, 23 Nov 2025 09:18:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:30:13 +0000   Sun, 23 Nov 2025 09:18:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:30:13 +0000   Sun, 23 Nov 2025 09:25:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-857095
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                eb10d252-491a-4fd2-89b0-513efb8fdf15
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-jr7sx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 coredns-66bc5c9577-gqskt             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 coredns-66bc5c9577-kqvhl             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-ha-857095                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-r7p2c                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-857095             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-857095    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9qgbr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-857095             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-857095                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m29s                  kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-857095 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-857095 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-857095 status is now: NodeHasSufficientMemory
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     11m                    kubelet          Node ha-857095 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m                    kubelet          Node ha-857095 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m                    kubelet          Node ha-857095 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           11m                    node-controller  Node ha-857095 event: Registered Node ha-857095 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-857095 event: Registered Node ha-857095 in Controller
	  Normal   NodeReady                10m                    kubelet          Node ha-857095 status is now: NodeReady
	  Normal   RegisteredNode           9m31s                  node-controller  Node ha-857095 event: Registered Node ha-857095 in Controller
	  Normal   RegisteredNode           6m44s                  node-controller  Node ha-857095 event: Registered Node ha-857095 in Controller
	  Normal   NodeHasSufficientMemory  6m17s (x8 over 6m17s)  kubelet          Node ha-857095 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m17s (x8 over 6m17s)  kubelet          Node ha-857095 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m17s (x8 over 6m17s)  kubelet          Node ha-857095 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m1s                   node-controller  Node ha-857095 event: Registered Node ha-857095 in Controller
	  Normal   RegisteredNode           3m44s                  node-controller  Node ha-857095 event: Registered Node ha-857095 in Controller
	
	
	Name:               ha-857095-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-857095-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=ha-857095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_23T09_19_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:19:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857095-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:30:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:28:13 +0000   Sun, 23 Nov 2025 09:19:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:28:13 +0000   Sun, 23 Nov 2025 09:19:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:28:13 +0000   Sun, 23 Nov 2025 09:19:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:28:13 +0000   Sun, 23 Nov 2025 09:20:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-857095-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                086daa3d-fd9f-4e74-8f1b-3235f7c68f88
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-ltgrn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 etcd-ha-857095-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-v5cch                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-857095-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-857095-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-275zc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-857095-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-857095-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m28s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   CIDRAssignmentFailed     10m                    cidrAllocator    Node ha-857095-m02 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           10m                    node-controller  Node ha-857095-m02 event: Registered Node ha-857095-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-857095-m02 event: Registered Node ha-857095-m02 in Controller
	  Normal   RegisteredNode           9m31s                  node-controller  Node ha-857095-m02 event: Registered Node ha-857095-m02 in Controller
	  Normal   NodeHasSufficientPID     7m21s (x8 over 7m21s)  kubelet          Node ha-857095-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m21s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m21s (x8 over 7m21s)  kubelet          Node ha-857095-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m21s (x8 over 7m21s)  kubelet          Node ha-857095-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           6m44s                  node-controller  Node ha-857095-m02 event: Registered Node ha-857095-m02 in Controller
	  Normal   NodeHasSufficientMemory  6m14s (x8 over 6m14s)  kubelet          Node ha-857095-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 6m14s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m14s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    6m14s (x8 over 6m14s)  kubelet          Node ha-857095-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m14s (x8 over 6m14s)  kubelet          Node ha-857095-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        5m14s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m1s                   node-controller  Node ha-857095-m02 event: Registered Node ha-857095-m02 in Controller
	  Normal   RegisteredNode           3m44s                  node-controller  Node ha-857095-m02 event: Registered Node ha-857095-m02 in Controller
	
	
	Name:               ha-857095-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-857095-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=ha-857095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_23T09_20_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:20:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857095-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:30:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:26:16 +0000   Sun, 23 Nov 2025 09:20:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:26:16 +0000   Sun, 23 Nov 2025 09:20:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:26:16 +0000   Sun, 23 Nov 2025 09:20:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:26:16 +0000   Sun, 23 Nov 2025 09:21:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-857095-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                fc3d9b55-35fb-49e5-827b-74bd47bacc46
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-xdt5w                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 etcd-ha-857095-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m26s
	  kube-system                 kindnet-8bs9t                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m29s
	  kube-system                 kube-apiserver-ha-857095-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m26s
	  kube-system                 kube-controller-manager-ha-857095-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m26s
	  kube-system                 kube-proxy-6k46z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 kube-scheduler-ha-857095-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m26s
	  kube-system                 kube-vip-ha-857095-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m23s                  kube-proxy       
	  Normal   Starting                 3m49s                  kube-proxy       
	  Normal   CIDRAssignmentFailed     9m29s                  cidrAllocator    Node ha-857095-m03 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           9m29s                  node-controller  Node ha-857095-m03 event: Registered Node ha-857095-m03 in Controller
	  Normal   RegisteredNode           9m28s                  node-controller  Node ha-857095-m03 event: Registered Node ha-857095-m03 in Controller
	  Normal   RegisteredNode           9m26s                  node-controller  Node ha-857095-m03 event: Registered Node ha-857095-m03 in Controller
	  Normal   RegisteredNode           6m44s                  node-controller  Node ha-857095-m03 event: Registered Node ha-857095-m03 in Controller
	  Warning  CgroupV1                 4m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 4m28s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  4m27s (x8 over 4m27s)  kubelet          Node ha-857095-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m27s (x8 over 4m27s)  kubelet          Node ha-857095-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m27s (x8 over 4m27s)  kubelet          Node ha-857095-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m1s                   node-controller  Node ha-857095-m03 event: Registered Node ha-857095-m03 in Controller
	  Normal   RegisteredNode           3m44s                  node-controller  Node ha-857095-m03 event: Registered Node ha-857095-m03 in Controller
	
	
	Name:               ha-857095-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-857095-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=ha-857095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_23T09_21_39_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:21:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857095-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:30:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:30:19 +0000   Sun, 23 Nov 2025 09:21:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:30:19 +0000   Sun, 23 Nov 2025 09:21:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:30:19 +0000   Sun, 23 Nov 2025 09:21:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:30:19 +0000   Sun, 23 Nov 2025 09:22:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-857095-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                b75a4a3a-17bf-4722-ac06-1e0fa9c0c524
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ls8hm       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m42s
	  kube-system                 kube-proxy-lqqmc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m39s                  kube-proxy       
	  Normal   Starting                 3m49s                  kube-proxy       
	  Normal   Starting                 8m42s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     8m42s (x3 over 8m42s)  kubelet          Node ha-857095-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    8m42s (x3 over 8m42s)  kubelet          Node ha-857095-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  8m42s (x3 over 8m42s)  kubelet          Node ha-857095-m04 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 8m42s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   CIDRAssignmentFailed     8m41s                  cidrAllocator    Node ha-857095-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           8m41s                  node-controller  Node ha-857095-m04 event: Registered Node ha-857095-m04 in Controller
	  Normal   RegisteredNode           8m39s                  node-controller  Node ha-857095-m04 event: Registered Node ha-857095-m04 in Controller
	  Normal   RegisteredNode           8m38s                  node-controller  Node ha-857095-m04 event: Registered Node ha-857095-m04 in Controller
	  Normal   NodeReady                8m                     kubelet          Node ha-857095-m04 status is now: NodeReady
	  Normal   RegisteredNode           6m44s                  node-controller  Node ha-857095-m04 event: Registered Node ha-857095-m04 in Controller
	  Warning  CgroupV1                 4m9s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 4m9s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  4m6s (x8 over 4m9s)    kubelet          Node ha-857095-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m6s (x8 over 4m9s)    kubelet          Node ha-857095-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m6s (x8 over 4m9s)    kubelet          Node ha-857095-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m1s                   node-controller  Node ha-857095-m04 event: Registered Node ha-857095-m04 in Controller
	  Normal   RegisteredNode           3m44s                  node-controller  Node ha-857095-m04 event: Registered Node ha-857095-m04 in Controller
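	
	(A hedged aside, not part of the captured report: the four node descriptions above are kubectl's node describe view. A sketch for re-collecting the same information against this profile, assuming the same minikube binary path the test uses:)
	
	  # Full per-node detail, as printed above.
	  out/minikube-linux-arm64 -p ha-857095 kubectl -- describe nodes
	  # Compact summary of roles, versions, internal IPs and readiness.
	  out/minikube-linux-arm64 -p ha-857095 kubectl -- get nodes -o wide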
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015154] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.511595] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034200] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753844] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.833249] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:37] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/22/fs': -2
	[Nov23 08:56] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 08:58] overlayfs: idmapped layers are currently not supported
	[  +0.083595] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov23 09:04] overlayfs: idmapped layers are currently not supported
	[ +53.074501] overlayfs: idmapped layers are currently not supported
	[Nov23 09:18] overlayfs: idmapped layers are currently not supported
	[Nov23 09:19] overlayfs: idmapped layers are currently not supported
	[Nov23 09:20] overlayfs: idmapped layers are currently not supported
	[Nov23 09:21] overlayfs: idmapped layers are currently not supported
	[Nov23 09:22] overlayfs: idmapped layers are currently not supported
	[Nov23 09:24] overlayfs: idmapped layers are currently not supported
	[  +2.761695] overlayfs: idmapped layers are currently not supported
	[Nov23 09:25] overlayfs: idmapped layers are currently not supported
	[Nov23 09:26] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9e42b9253fb8b630e7dc3c1bd90335205bd4e883a1a22f51d4cb68ee751bee2f] <==
	{"level":"info","ts":"2025-11-23T09:25:57.526029Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"41d977b14da551f4","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-23T09:25:57.526157Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"41d977b14da551f4"}
	{"level":"info","ts":"2025-11-23T09:25:57.526197Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4"}
	{"level":"info","ts":"2025-11-23T09:25:57.648272Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"41d977b14da551f4","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-23T09:25:57.648360Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4"}
	{"level":"info","ts":"2025-11-23T09:25:57.735464Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4"}
	{"level":"info","ts":"2025-11-23T09:25:57.744097Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4"}
	{"level":"warn","ts":"2025-11-23T09:25:58.241496Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:25:58.242688Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:25:58.279089Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"41d977b14da551f4","error":"failed to dial 41d977b14da551f4 on stream MsgApp v2 (EOF)"}
	{"level":"warn","ts":"2025-11-23T09:25:58.577737Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4"}
	{"level":"warn","ts":"2025-11-23T09:25:59.529477Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4"}
	{"level":"warn","ts":"2025-11-23T09:26:01.751282Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"41d977b14da551f4","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-23T09:26:01.751339Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"41d977b14da551f4","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-23T09:26:05.752703Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"41d977b14da551f4","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-23T09:26:05.752756Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"41d977b14da551f4","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-23T09:26:09.754171Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"41d977b14da551f4","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-23T09:26:09.754297Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"41d977b14da551f4","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-11-23T09:26:11.132581Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"41d977b14da551f4","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-23T09:26:11.132691Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"41d977b14da551f4"}
	{"level":"info","ts":"2025-11-23T09:26:11.132732Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4"}
	{"level":"info","ts":"2025-11-23T09:26:11.182065Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"41d977b14da551f4","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-23T09:26:11.182187Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4"}
	{"level":"info","ts":"2025-11-23T09:26:11.251471Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4"}
	{"level":"info","ts":"2025-11-23T09:26:11.252026Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4"}
	
	
	==> kernel <==
	 09:30:20 up  2:12,  0 user,  load average: 0.48, 1.12, 1.46
	Linux ha-857095 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [44a90d22da14ba2218ba4b094e5bf35de76a3687c704587dff1bde2ca21ded04] <==
	I1123 09:29:50.463335       1 main.go:324] Node ha-857095-m04 has CIDR [10.244.3.0/24] 
	I1123 09:30:00.470292       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1123 09:30:00.470336       1 main.go:324] Node ha-857095-m04 has CIDR [10.244.3.0/24] 
	I1123 09:30:00.470626       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:30:00.470640       1 main.go:301] handling current node
	I1123 09:30:00.470655       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1123 09:30:00.470661       1 main.go:324] Node ha-857095-m02 has CIDR [10.244.1.0/24] 
	I1123 09:30:00.470730       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1123 09:30:00.470735       1 main.go:324] Node ha-857095-m03 has CIDR [10.244.2.0/24] 
	I1123 09:30:10.458780       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:30:10.458815       1 main.go:301] handling current node
	I1123 09:30:10.458830       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1123 09:30:10.458835       1 main.go:324] Node ha-857095-m02 has CIDR [10.244.1.0/24] 
	I1123 09:30:10.458995       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1123 09:30:10.459007       1 main.go:324] Node ha-857095-m03 has CIDR [10.244.2.0/24] 
	I1123 09:30:10.459074       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1123 09:30:10.459085       1 main.go:324] Node ha-857095-m04 has CIDR [10.244.3.0/24] 
	I1123 09:30:20.463179       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:30:20.463218       1 main.go:301] handling current node
	I1123 09:30:20.463234       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1123 09:30:20.463240       1 main.go:324] Node ha-857095-m02 has CIDR [10.244.1.0/24] 
	I1123 09:30:20.463384       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1123 09:30:20.463402       1 main.go:324] Node ha-857095-m03 has CIDR [10.244.2.0/24] 
	I1123 09:30:20.463484       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1123 09:30:20.463495       1 main.go:324] Node ha-857095-m04 has CIDR [10.244.3.0/24] 
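	
	(A hedged aside, not part of the captured logs: kindnet is reconciling one PodCIDR per peer node, matching the PodCIDRs listed in the node descriptions above. A sketch for verifying the resulting routing on the primary node, assuming kindnet installs plain kernel routes via the peer node IPs:)
	
	  # Expect one route per remote PodCIDR, e.g. 10.244.1.0/24 via 192.168.49.3.
	  out/minikube-linux-arm64 -p ha-857095 ssh -- ip route show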
	
	
	==> kube-apiserver [20bdce066bf2bdfda4bff2f53735c6b970c68ead5b62cf3e3e86c4b95b160933] <==
	I1123 09:25:47.278052       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 09:25:47.278070       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 09:25:47.278866       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 09:25:47.278902       1 policy_source.go:240] refreshing policies
	I1123 09:25:47.287561       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 09:25:47.287661       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 09:25:47.303439       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 09:25:47.304398       1 aggregator.go:171] initial CRD sync complete...
	I1123 09:25:47.304420       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 09:25:47.304427       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:25:47.304433       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:25:47.306880       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:25:47.312368       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:25:47.315842       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 09:25:47.331215       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1123 09:25:47.341974       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1123 09:25:47.343413       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:25:47.349727       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 09:25:47.356476       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1123 09:25:47.360379       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1123 09:25:48.471838       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 09:25:48.681723       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 09:25:48.681779       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:25:49.536933       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1123 09:25:49.694752       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	
	
	==> kube-apiserver [87647aaa5cefc0905445d50290aa43a681b39b2952b4b76e62eebbf3bc28afa7] <==
	{"level":"warn","ts":"2025-11-23T09:25:06.061170Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a10780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.061187Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40009b7a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.061199Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a11a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.061214Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018883c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.061228Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a14b40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.061240Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021af0e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.061254Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a143c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.064223Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001717c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.064544Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a110e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.064838Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016b0780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.064930Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014a3a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065031Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40020b6f00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065161Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400147da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065226Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026a6960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065286Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400147d0e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065500Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014a30e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065635Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026a7860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065704Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018892c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065649Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400237a3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065768Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021ae000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065840Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400237a3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065890Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40022f65a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065934Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400253ab40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065896Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400237a3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	F1123 09:25:12.514838       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
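	
	(A hedged aside, not part of the captured logs: this earlier kube-apiserver container died with a fatal "start-service-ip-repair-controllers" post-start hook failure after its etcd requests timed out; the replacement container 20bdce066bf2 shown above then started cleanly. A sketch for comparing both containers directly on the node via CRI-O, assuming SSH access through the minikube binary:)
	
	  # Both kube-apiserver containers, including the exited one.
	  out/minikube-linux-arm64 -p ha-857095 ssh -- sudo crictl ps -a --name kube-apiserver
	  # Tail of the failed container's log (ID prefix taken from the section header above).
	  out/minikube-linux-arm64 -p ha-857095 ssh -- sudo crictl logs --tail 50 87647aaa5cefc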
	
	
	==> kube-controller-manager [0a33af9e8b2a42206b9242b60e1ac591916a754050f750d72ed69394370de6d1] <==
	I1123 09:25:37.948675       1 serving.go:386] Generated self-signed cert in-memory
	I1123 09:25:38.833382       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1123 09:25:38.833431       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:25:38.836291       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1123 09:25:38.837318       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1123 09:25:38.837465       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:25:38.837539       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1123 09:25:48.855602       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
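	
	(A hedged aside, not part of the captured logs: this controller-manager instance gave up because the apiserver's /healthz kept reporting "[-]poststarthook/rbac/bootstrap-roles failed"; the later instance 42babfae9832 shown below then synced its caches normally. A sketch for probing the same health endpoints with kubectl's credentials:)
	
	  # Aggregate health, then the per-check breakdown the error above quotes.
	  kubectl get --raw='/healthz'
	  kubectl get --raw='/healthz?verbose'
	  # Readiness view exposed by kube-apiserver.
	  kubectl get --raw='/readyz?verbose'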
	
	
	==> kube-controller-manager [42babfae983262eb923a97314d7a3b093122d61af813a84a2bcf0956e5326956] <==
	I1123 09:26:36.391739       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:26:36.394028       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 09:26:36.398532       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:26:36.398603       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-857095-m04"
	I1123 09:26:36.399061       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:26:36.405978       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 09:26:36.410273       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:26:36.413789       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:26:36.414983       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:26:36.417314       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:26:36.426520       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:26:36.431782       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 09:26:36.431918       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 09:26:36.432025       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-857095"
	I1123 09:26:36.432076       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-857095-m02"
	I1123 09:26:36.432103       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-857095-m03"
	I1123 09:26:36.432194       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-857095-m04"
	I1123 09:26:36.432429       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 09:26:36.438901       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:26:36.439030       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:26:36.439208       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:26:36.440406       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:26:36.440861       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 09:26:36.448935       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:26:36.450716       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	
	
	==> kube-proxy [87aec09c596b09d0dbca59c7079a492763b5c52c19573dc16282f6cb518a9e7e] <==
	I1123 09:25:50.879789       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:25:51.026823       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:25:51.134697       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:25:51.136508       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 09:25:51.136724       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:25:51.202997       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:25:51.203052       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:25:51.216346       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:25:51.216642       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:25:51.216660       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:25:51.218354       1 config.go:200] "Starting service config controller"
	I1123 09:25:51.218379       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:25:51.218397       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:25:51.218402       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:25:51.218413       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:25:51.218417       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:25:51.219083       1 config.go:309] "Starting node config controller"
	I1123 09:25:51.219104       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:25:51.219111       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:25:51.318740       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:25:51.318794       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:25:51.318870       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ae37103ec68135e4d1b955e8ad30e29e8d9e94f916f7903941858b029829d4fa] <==
	E1123 09:24:52.835206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:24:54.323132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:24:55.524260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:24:56.752400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:24:58.592574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 09:24:58.667983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:24:59.382668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:25:18.637026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:25:19.825775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:25:20.082695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:25:20.832268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:25:24.289699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:25:24.449453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:25:25.570813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:25:26.094376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:25:27.437175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:25:27.792384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:25:29.554608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:25:33.363560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:25:37.619850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:25:37.960772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:25:38.292787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:25:39.668574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:25:40.525135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1123 09:25:47.766169       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:25:49 ha-857095 kubelet[802]: I1123 09:25:49.487295     802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4f419f5-ecbc-48e6-8f98-732c4ac5a977-xtables-lock\") pod \"kindnet-r7p2c\" (UID: \"a4f419f5-ecbc-48e6-8f98-732c4ac5a977\") " pod="kube-system/kindnet-r7p2c"
	Nov 23 09:25:49 ha-857095 kubelet[802]: I1123 09:25:49.543395     802 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-857095"
	Nov 23 09:25:49 ha-857095 kubelet[802]: I1123 09:25:49.543435     802 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-857095"
	Nov 23 09:25:49 ha-857095 kubelet[802]: I1123 09:25:49.576170     802 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 09:25:49 ha-857095 kubelet[802]: I1123 09:25:49.718394     802 scope.go:117] "RemoveContainer" containerID="53b6dc95eaa49c07b80a3c7bd2747da0109e5512392b5c622ebfb42a3ff35637"
	Nov 23 09:25:49 ha-857095 kubelet[802]: I1123 09:25:49.719092     802 scope.go:117] "RemoveContainer" containerID="0a33af9e8b2a42206b9242b60e1ac591916a754050f750d72ed69394370de6d1"
	Nov 23 09:25:49 ha-857095 kubelet[802]: E1123 09:25:49.719314     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857095_kube-system(00c64a04113bd2caba88f2fd71957641)\"" pod="kube-system/kube-controller-manager-ha-857095" podUID="00c64a04113bd2caba88f2fd71957641"
	Nov 23 09:25:49 ha-857095 kubelet[802]: I1123 09:25:49.728466     802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-857095" podStartSLOduration=0.728448114 podStartE2EDuration="728.448114ms" podCreationTimestamp="2025-11-23 09:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:25:49.692474502 +0000 UTC m=+106.440478825" watchObservedRunningTime="2025-11-23 09:25:49.728448114 +0000 UTC m=+106.476452437"
	Nov 23 09:25:49 ha-857095 kubelet[802]: W1123 09:25:49.949907     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8/crio-0282e268e1c225d6584a360fa3666cd3b05fe5e4ae10a25f2468beb3ffa25fbd WatchSource:0}: Error finding container 0282e268e1c225d6584a360fa3666cd3b05fe5e4ae10a25f2468beb3ffa25fbd: Status 404 returned error can't find the container with id 0282e268e1c225d6584a360fa3666cd3b05fe5e4ae10a25f2468beb3ffa25fbd
	Nov 23 09:25:49 ha-857095 kubelet[802]: W1123 09:25:49.970465     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8/crio-37bcc6634aaea4d8960df76401e753994761c739db1ba0d3445df4971e6c8476 WatchSource:0}: Error finding container 37bcc6634aaea4d8960df76401e753994761c739db1ba0d3445df4971e6c8476: Status 404 returned error can't find the container with id 37bcc6634aaea4d8960df76401e753994761c739db1ba0d3445df4971e6c8476
	Nov 23 09:25:50 ha-857095 kubelet[802]: W1123 09:25:50.102777     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8/crio-4a02c866c3881ec97506e3c268b9fe4e509c859dea6c3f78578fa9a6f040c9cc WatchSource:0}: Error finding container 4a02c866c3881ec97506e3c268b9fe4e509c859dea6c3f78578fa9a6f040c9cc: Status 404 returned error can't find the container with id 4a02c866c3881ec97506e3c268b9fe4e509c859dea6c3f78578fa9a6f040c9cc
	Nov 23 09:25:51 ha-857095 kubelet[802]: I1123 09:25:51.422866     802 scope.go:117] "RemoveContainer" containerID="0a33af9e8b2a42206b9242b60e1ac591916a754050f750d72ed69394370de6d1"
	Nov 23 09:25:51 ha-857095 kubelet[802]: E1123 09:25:51.423566     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857095_kube-system(00c64a04113bd2caba88f2fd71957641)\"" pod="kube-system/kube-controller-manager-ha-857095" podUID="00c64a04113bd2caba88f2fd71957641"
	Nov 23 09:25:51 ha-857095 kubelet[802]: I1123 09:25:51.764121     802 scope.go:117] "RemoveContainer" containerID="0a33af9e8b2a42206b9242b60e1ac591916a754050f750d72ed69394370de6d1"
	Nov 23 09:25:51 ha-857095 kubelet[802]: E1123 09:25:51.764277     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857095_kube-system(00c64a04113bd2caba88f2fd71957641)\"" pod="kube-system/kube-controller-manager-ha-857095" podUID="00c64a04113bd2caba88f2fd71957641"
	Nov 23 09:26:03 ha-857095 kubelet[802]: E1123 09:26:03.376321     802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f78c0f38108822697ec35fb515af63cbe74822d5919dc7de72b9b416923926\": container with ID starting with e1f78c0f38108822697ec35fb515af63cbe74822d5919dc7de72b9b416923926 not found: ID does not exist" containerID="e1f78c0f38108822697ec35fb515af63cbe74822d5919dc7de72b9b416923926"
	Nov 23 09:26:03 ha-857095 kubelet[802]: I1123 09:26:03.376378     802 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="e1f78c0f38108822697ec35fb515af63cbe74822d5919dc7de72b9b416923926" err="rpc error: code = NotFound desc = could not find container \"e1f78c0f38108822697ec35fb515af63cbe74822d5919dc7de72b9b416923926\": container with ID starting with e1f78c0f38108822697ec35fb515af63cbe74822d5919dc7de72b9b416923926 not found: ID does not exist"
	Nov 23 09:26:03 ha-857095 kubelet[802]: E1123 09:26:03.434387     802 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2828455af2abf3ed01ffea7b324458e4f00c51da375d485188a001929b1e774a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2828455af2abf3ed01ffea7b324458e4f00c51da375d485188a001929b1e774a/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-857095_00c64a04113bd2caba88f2fd71957641/kube-controller-manager/3.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-857095_00c64a04113bd2caba88f2fd71957641/kube-controller-manager/3.log: no such file or directory
	Nov 23 09:26:03 ha-857095 kubelet[802]: E1123 09:26:03.440783     802 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/048a676d17b8beaf191cb14c2716a86bde8600dcabf403282009e019ff371098/diff" to get inode usage: stat /var/lib/containers/storage/overlay/048a676d17b8beaf191cb14c2716a86bde8600dcabf403282009e019ff371098/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-857095_00c64a04113bd2caba88f2fd71957641/kube-controller-manager/2.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-857095_00c64a04113bd2caba88f2fd71957641/kube-controller-manager/2.log: no such file or directory
	Nov 23 09:26:05 ha-857095 kubelet[802]: I1123 09:26:05.461096     802 scope.go:117] "RemoveContainer" containerID="0a33af9e8b2a42206b9242b60e1ac591916a754050f750d72ed69394370de6d1"
	Nov 23 09:26:05 ha-857095 kubelet[802]: E1123 09:26:05.461342     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857095_kube-system(00c64a04113bd2caba88f2fd71957641)\"" pod="kube-system/kube-controller-manager-ha-857095" podUID="00c64a04113bd2caba88f2fd71957641"
	Nov 23 09:26:19 ha-857095 kubelet[802]: I1123 09:26:19.461244     802 scope.go:117] "RemoveContainer" containerID="0a33af9e8b2a42206b9242b60e1ac591916a754050f750d72ed69394370de6d1"
	Nov 23 09:26:19 ha-857095 kubelet[802]: E1123 09:26:19.462321     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857095_kube-system(00c64a04113bd2caba88f2fd71957641)\"" pod="kube-system/kube-controller-manager-ha-857095" podUID="00c64a04113bd2caba88f2fd71957641"
	Nov 23 09:26:20 ha-857095 kubelet[802]: I1123 09:26:20.849244     802 scope.go:117] "RemoveContainer" containerID="d01764f14c48facfa6e2f2a116b511c2ae876c073a208e73e2fd13c40f370017"
	Nov 23 09:26:33 ha-857095 kubelet[802]: I1123 09:26:33.461191     802 scope.go:117] "RemoveContainer" containerID="0a33af9e8b2a42206b9242b60e1ac591916a754050f750d72ed69394370de6d1"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-857095 -n ha-857095
helpers_test.go:269: (dbg) Run:  kubectl --context ha-857095 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (413.25s)
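The controller-manager churn in the logs above reduces to one condition: the apiserver's /healthz reports "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" while every other check is ok, so container 0a33af9e times out waiting for a healthy apiserver, exits, and the kubelet backs it off into CrashLoopBackOff until the replacement (42babfae) syncs its caches at 09:26:36. A throwaway probe of that same endpoint, useful when reproducing this locally, might look like the sketch below. Assumptions (not part of the test suite): the control-plane endpoint https://192.168.49.2:8443 seen in the logs is reachable from wherever the probe runs, skipping TLS verification is acceptable for debugging, and anonymous access to /healthz is allowed (it can return 403 until the RBAC bootstrap completes, in which case credentials from the cluster's kubeconfig would be needed).

// healthz_probe.go - minimal sketch: poll the apiserver's verbose /healthz the way
// the controller-manager waits for it before building its controller context.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Debug probe only: skip certificate verification for the minikube apiserver cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8443/healthz?verbose")
		if err != nil {
			fmt.Println("probe error:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 200 means every post-start hook (rbac/bootstrap-roles included) reported ok;
			// otherwise the body lists each check as [+]/[-], matching the error above.
			fmt.Printf("status=%d\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}

Polling this during the restart window shows when the rbac post-start hook flips from [-] to [+], which in the run above is the point at which the controller-manager stops crash-looping.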

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-857095" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-857095\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-857095\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-857095\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"reg
istry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticI
P\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-857095
helpers_test.go:243: (dbg) docker inspect ha-857095:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8",
	        "Created": "2025-11-23T09:18:21.765330623Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 332137,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:23:56.467420574Z",
	            "FinishedAt": "2025-11-23T09:23:55.842197436Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8/hosts",
	        "LogPath": "/var/lib/docker/containers/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8-json.log",
	        "Name": "/ha-857095",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-857095:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-857095",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8",
	                "LowerDir": "/var/lib/docker/overlay2/7b0839d24d2a3baaaf22d9c15821d50414819cc142231fd0b30407a9910e5b2a-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b0839d24d2a3baaaf22d9c15821d50414819cc142231fd0b30407a9910e5b2a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b0839d24d2a3baaaf22d9c15821d50414819cc142231fd0b30407a9910e5b2a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b0839d24d2a3baaaf22d9c15821d50414819cc142231fd0b30407a9910e5b2a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-857095",
	                "Source": "/var/lib/docker/volumes/ha-857095/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-857095",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-857095",
	                "name.minikube.sigs.k8s.io": "ha-857095",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c0efb14adc07b4c286adcc93e164cef6836115ca98a2993e2ff3c5210cff68f1",
	            "SandboxKey": "/var/run/docker/netns/c0efb14adc07",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-857095": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:66:b1:d7:8d:5f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d56166f18c3a11f7c4d9e5d1ffa88fcabe405ba7af460096f6e964bfe85cc560",
	                    "EndpointID": "fb53906039e958dd0bfc9dec4873b4afafd8f1e971bdb75d9d0cf827c82fb8d3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-857095",
	                        "8497a55e0a4e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-857095 -n ha-857095
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-857095 logs -n 25: (1.423467503s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-857095 ssh -n ha-857095-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m02 sudo cat /home/docker/cp-test_ha-857095-m03_ha-857095-m02.txt                                         │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ cp      │ ha-857095 cp ha-857095-m03:/home/docker/cp-test.txt ha-857095-m04:/home/docker/cp-test_ha-857095-m03_ha-857095-m04.txt               │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m04 sudo cat /home/docker/cp-test_ha-857095-m03_ha-857095-m04.txt                                         │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ cp      │ ha-857095 cp testdata/cp-test.txt ha-857095-m04:/home/docker/cp-test.txt                                                             │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ cp      │ ha-857095 cp ha-857095-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1815903833/001/cp-test_ha-857095-m04.txt │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ cp      │ ha-857095 cp ha-857095-m04:/home/docker/cp-test.txt ha-857095:/home/docker/cp-test_ha-857095-m04_ha-857095.txt                       │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095 sudo cat /home/docker/cp-test_ha-857095-m04_ha-857095.txt                                                 │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ cp      │ ha-857095 cp ha-857095-m04:/home/docker/cp-test.txt ha-857095-m02:/home/docker/cp-test_ha-857095-m04_ha-857095-m02.txt               │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m02 sudo cat /home/docker/cp-test_ha-857095-m04_ha-857095-m02.txt                                         │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ cp      │ ha-857095 cp ha-857095-m04:/home/docker/cp-test.txt ha-857095-m03:/home/docker/cp-test_ha-857095-m04_ha-857095-m03.txt               │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ ssh     │ ha-857095 ssh -n ha-857095-m03 sudo cat /home/docker/cp-test_ha-857095-m04_ha-857095-m03.txt                                         │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ node    │ ha-857095 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:22 UTC │
	│ node    │ ha-857095 node start m02 --alsologtostderr -v 5                                                                                      │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:22 UTC │ 23 Nov 25 09:23 UTC │
	│ node    │ ha-857095 node list --alsologtostderr -v 5                                                                                           │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:23 UTC │                     │
	│ stop    │ ha-857095 stop --alsologtostderr -v 5                                                                                                │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:23 UTC │ 23 Nov 25 09:23 UTC │
	│ start   │ ha-857095 start --wait true --alsologtostderr -v 5                                                                                   │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:23 UTC │                     │
	│ node    │ ha-857095 node list --alsologtostderr -v 5                                                                                           │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:30 UTC │                     │
	│ node    │ ha-857095 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-857095 │ jenkins │ v1.37.0 │ 23 Nov 25 09:30 UTC │ 23 Nov 25 09:30 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:23:56
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:23:56.195666  332015 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:23:56.195782  332015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:23:56.195793  332015 out.go:374] Setting ErrFile to fd 2...
	I1123 09:23:56.195799  332015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:23:56.196022  332015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:23:56.196372  332015 out.go:368] Setting JSON to false
	I1123 09:23:56.197168  332015 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7585,"bootTime":1763882251,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 09:23:56.197241  332015 start.go:143] virtualization:  
	I1123 09:23:56.202491  332015 out.go:179] * [ha-857095] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 09:23:56.205469  332015 notify.go:221] Checking for updates...
	I1123 09:23:56.205985  332015 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:23:56.209103  332015 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:23:56.212257  332015 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:23:56.214935  332015 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 09:23:56.217823  332015 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 09:23:56.220754  332015 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:23:56.224090  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:23:56.224192  332015 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:23:56.248091  332015 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 09:23:56.248221  332015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:23:56.316560  332015 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-23 09:23:56.306152339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:23:56.316667  332015 docker.go:319] overlay module found
	I1123 09:23:56.319905  332015 out.go:179] * Using the docker driver based on existing profile
	I1123 09:23:56.322883  332015 start.go:309] selected driver: docker
	I1123 09:23:56.322910  332015 start.go:927] validating driver "docker" against &{Name:ha-857095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:23:56.323070  332015 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:23:56.323169  332015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:23:56.383495  332015 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-23 09:23:56.374562034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:23:56.383895  332015 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:23:56.383914  332015 cni.go:84] Creating CNI manager for ""
	I1123 09:23:56.383965  332015 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1123 09:23:56.384008  332015 start.go:353] cluster config:
	{Name:ha-857095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:23:56.387318  332015 out.go:179] * Starting "ha-857095" primary control-plane node in "ha-857095" cluster
	I1123 09:23:56.390204  332015 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:23:56.393222  332015 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:23:56.395941  332015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:23:56.395987  332015 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 09:23:56.395997  332015 cache.go:65] Caching tarball of preloaded images
	I1123 09:23:56.396063  332015 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:23:56.396081  332015 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 09:23:56.396092  332015 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:23:56.396244  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:23:56.413619  332015 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:23:56.413643  332015 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:23:56.413663  332015 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:23:56.413694  332015 start.go:360] acquireMachinesLock for ha-857095: {Name:mk7ea4c3d6888276233865fa5f92414123c08091 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:23:56.413754  332015 start.go:364] duration metric: took 36.201µs to acquireMachinesLock for "ha-857095"
	I1123 09:23:56.413778  332015 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:23:56.413787  332015 fix.go:54] fixHost starting: 
	I1123 09:23:56.414049  332015 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:23:56.430596  332015 fix.go:112] recreateIfNeeded on ha-857095: state=Stopped err=<nil>
	W1123 09:23:56.430627  332015 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:23:56.433965  332015 out.go:252] * Restarting existing docker container for "ha-857095" ...
	I1123 09:23:56.434061  332015 cli_runner.go:164] Run: docker start ha-857095
	I1123 09:23:56.669371  332015 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:23:56.694309  332015 kic.go:430] container "ha-857095" state is running.
	I1123 09:23:56.694718  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095
	I1123 09:23:56.714939  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:23:56.715179  332015 machine.go:94] provisionDockerMachine start ...
	I1123 09:23:56.715249  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:23:56.739434  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:23:56.739774  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33182 <nil> <nil>}
	I1123 09:23:56.739790  332015 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:23:56.740583  332015 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45372->127.0.0.1:33182: read: connection reset by peer
	I1123 09:23:59.888928  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095
	
	I1123 09:23:59.888954  332015 ubuntu.go:182] provisioning hostname "ha-857095"
	I1123 09:23:59.889018  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:23:59.906579  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:23:59.906895  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33182 <nil> <nil>}
	I1123 09:23:59.906906  332015 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-857095 && echo "ha-857095" | sudo tee /etc/hostname
	I1123 09:24:00.143191  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095
	
	I1123 09:24:00.143304  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:00.200109  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:00.200444  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33182 <nil> <nil>}
	I1123 09:24:00.200460  332015 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857095/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:24:00.391079  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:24:00.391118  332015 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 09:24:00.391140  332015 ubuntu.go:190] setting up certificates
	I1123 09:24:00.391151  332015 provision.go:84] configureAuth start
	I1123 09:24:00.391221  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095
	I1123 09:24:00.416269  332015 provision.go:143] copyHostCerts
	I1123 09:24:00.416328  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:24:00.416373  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 09:24:00.416396  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:24:00.416502  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 09:24:00.416616  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:24:00.416643  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 09:24:00.416649  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:24:00.416685  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 09:24:00.416740  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:24:00.416764  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 09:24:00.416769  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:24:00.416796  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 09:24:00.416852  332015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.ha-857095 san=[127.0.0.1 192.168.49.2 ha-857095 localhost minikube]
	I1123 09:24:00.654716  332015 provision.go:177] copyRemoteCerts
	I1123 09:24:00.654793  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:24:00.654834  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:00.677057  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:24:00.781001  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1123 09:24:00.781107  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:24:00.798881  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1123 09:24:00.798961  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1123 09:24:00.816589  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1123 09:24:00.816669  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:24:00.834536  332015 provision.go:87] duration metric: took 443.371132ms to configureAuth
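For illustration only, the server-cert generation step recorded above (provision.go:117, SANs 127.0.0.1 / 192.168.49.2 / ha-857095 / localhost / minikube) can be sketched with Go's standard crypto/x509 package. This is a minimal stand-in, not minikube's code, and it assumes the CA key is a PEM-encoded PKCS#1 RSA key:

```go
// Sketch: sign a server cert with an existing CA, using the SANs from the log.
// Not minikube's implementation; file names and key format are assumptions.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads a file and returns the bytes of its first PEM block.
func mustPEM(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem"))
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem")) // assumed PKCS#1
	if err != nil {
		panic(err)
	}
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "ha-857095", Organization: []string{"jenkins.ha-857095"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go:117 log line above.
		DNSNames:    []string{"ha-857095", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)}), 0o600)
}
```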
	I1123 09:24:00.834605  332015 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:24:00.834885  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:24:00.835007  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:00.852135  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:00.852465  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33182 <nil> <nil>}
	I1123 09:24:00.852484  332015 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:24:01.230722  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:24:01.230745  332015 machine.go:97] duration metric: took 4.515545369s to provisionDockerMachine
	I1123 09:24:01.230757  332015 start.go:293] postStartSetup for "ha-857095" (driver="docker")
	I1123 09:24:01.230784  332015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:24:01.230849  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:24:01.230895  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:01.255652  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:24:01.361493  332015 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:24:01.364819  332015 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:24:01.364849  332015 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:24:01.364861  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 09:24:01.364917  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 09:24:01.364992  332015 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 09:24:01.365000  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /etc/ssl/certs/2849042.pem
	I1123 09:24:01.365102  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:24:01.373236  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:24:01.391175  332015 start.go:296] duration metric: took 160.402274ms for postStartSetup
	I1123 09:24:01.391305  332015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:24:01.391349  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:01.408403  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:24:01.514432  332015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:24:01.519130  332015 fix.go:56] duration metric: took 5.105336191s for fixHost
	I1123 09:24:01.519158  332015 start.go:83] releasing machines lock for "ha-857095", held for 5.105389919s
	I1123 09:24:01.519225  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095
	I1123 09:24:01.535905  332015 ssh_runner.go:195] Run: cat /version.json
	I1123 09:24:01.535965  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:01.536231  332015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:24:01.536282  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:24:01.562880  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:24:01.565249  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:24:01.665009  332015 ssh_runner.go:195] Run: systemctl --version
	I1123 09:24:01.757828  332015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:24:01.794910  332015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:24:01.799455  332015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:24:01.799605  332015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:24:01.807720  332015 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:24:01.807746  332015 start.go:496] detecting cgroup driver to use...
	I1123 09:24:01.807800  332015 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:24:01.807878  332015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:24:01.822720  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:24:01.836248  332015 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:24:01.836404  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:24:01.853658  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:24:01.867264  332015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:24:01.974745  332015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:24:02.101306  332015 docker.go:234] disabling docker service ...
	I1123 09:24:02.101464  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:24:02.117932  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:24:02.131548  332015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:24:02.243604  332015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:24:02.362672  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:24:02.376516  332015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:24:02.391962  332015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:24:02.392048  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.400619  332015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:24:02.400698  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.410062  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.419774  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.429277  332015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:24:02.438031  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.447555  332015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.455833  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:02.464518  332015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:24:02.472029  332015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:24:02.479828  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:24:02.606510  332015 ssh_runner.go:195] Run: sudo systemctl restart crio
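The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the crio restart. A minimal Go sketch of the same kind of edit (paths and values taken from the log, not from minikube's sources):

```go
// Sketch: replace pause_image and cgroup_manager lines in the CRI-O drop-in,
// mirroring the sed commands shown in the log above.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
}
```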
	I1123 09:24:02.773593  332015 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:24:02.773712  332015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:24:02.778273  332015 start.go:564] Will wait 60s for crictl version
	I1123 09:24:02.778386  332015 ssh_runner.go:195] Run: which crictl
	I1123 09:24:02.782031  332015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:24:02.805950  332015 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:24:02.806105  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:24:02.837219  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:24:02.868046  332015 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:24:02.870882  332015 cli_runner.go:164] Run: docker network inspect ha-857095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:24:02.888727  332015 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 09:24:02.893087  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
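The /etc/hosts rewrite quoted above drops any stale host.minikube.internal entry and appends the gateway address again. A rough Go equivalent, assuming direct write access to /etc/hosts, looks like:

```go
// Sketch: filter out the old host.minikube.internal line and re-append it,
// matching the grep -v / echo / cp pipeline in the log above.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
```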
	I1123 09:24:02.903114  332015 kubeadm.go:884] updating cluster {Name:ha-857095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:24:02.903266  332015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:24:02.903340  332015 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:24:02.938058  332015 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:24:02.938082  332015 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:24:02.938142  332015 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:24:02.965340  332015 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:24:02.965366  332015 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:24:02.965376  332015 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1123 09:24:02.965526  332015 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-857095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
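The kubelet drop-in shown above is generated from the cluster config and later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A hypothetical text/template sketch of that rendering step (field names here are illustrative, not minikube's own):

```go
// Sketch: render a kubelet systemd drop-in with hostname-override and node-ip
// filled in from values seen in the log; the template is an assumption.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.34.1",
		"NodeName":          "ha-857095",
		"NodeIP":            "192.168.49.2",
	})
}
```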
	I1123 09:24:02.965621  332015 ssh_runner.go:195] Run: crio config
	I1123 09:24:03.024329  332015 cni.go:84] Creating CNI manager for ""
	I1123 09:24:03.024405  332015 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1123 09:24:03.024439  332015 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:24:03.024493  332015 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-857095 NodeName:ha-857095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:24:03.024670  332015 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-857095"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:24:03.024706  332015 kube-vip.go:115] generating kube-vip config ...
	I1123 09:24:03.024788  332015 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1123 09:24:03.037111  332015 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:24:03.037290  332015 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
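The kube-vip manifest above is emitted without IPVS-based control-plane load balancing because the earlier `lsmod | grep ip_vs` probe failed. A minimal Go version of that probe, reading /proc/modules directly (an assumption; a built-in ip_vs would not appear there):

```go
// Sketch: check whether the ip_vs kernel module is loaded, analogous to the
// lsmod | grep ip_vs check logged by kube-vip.go above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/proc/modules")
	if err != nil {
		fmt.Println("cannot read /proc/modules:", err)
		return
	}
	if strings.Contains(string(data), "ip_vs") {
		fmt.Println("ip_vs available: control-plane load-balancing could be enabled")
	} else {
		fmt.Println("ip_vs missing: ARP-based VIP only")
	}
}
```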
	I1123 09:24:03.037395  332015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:24:03.045237  332015 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:24:03.045328  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1123 09:24:03.053429  332015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1123 09:24:03.066204  332015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:24:03.078929  332015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1123 09:24:03.092229  332015 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1123 09:24:03.104792  332015 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1123 09:24:03.108474  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:24:03.118280  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:24:03.231167  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:24:03.246187  332015 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095 for IP: 192.168.49.2
	I1123 09:24:03.246257  332015 certs.go:195] generating shared ca certs ...
	I1123 09:24:03.246288  332015 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:03.246475  332015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 09:24:03.246549  332015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 09:24:03.246586  332015 certs.go:257] generating profile certs ...
	I1123 09:24:03.246711  332015 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key
	I1123 09:24:03.246768  332015 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.fbc14aa1
	I1123 09:24:03.246799  332015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt.fbc14aa1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1123 09:24:03.300262  332015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt.fbc14aa1 ...
	I1123 09:24:03.300340  332015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt.fbc14aa1: {Name:mk96366c0e17998ceef956dc2b188d7321ecf01f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:03.300600  332015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.fbc14aa1 ...
	I1123 09:24:03.300633  332015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.fbc14aa1: {Name:mk3d8a4e6dd8546bed5a8d4ed49833bd7f302bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:03.300779  332015 certs.go:382] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt.fbc14aa1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt
	I1123 09:24:03.300944  332015 certs.go:386] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.fbc14aa1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key
	I1123 09:24:03.301074  332015 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key
	I1123 09:24:03.301086  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1123 09:24:03.301100  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1123 09:24:03.301112  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1123 09:24:03.301123  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1123 09:24:03.301134  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1123 09:24:03.301149  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1123 09:24:03.301161  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1123 09:24:03.301173  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1123 09:24:03.301228  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 09:24:03.301260  332015 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 09:24:03.301268  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:24:03.301296  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:24:03.301321  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:24:03.301343  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 09:24:03.301386  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:24:03.301443  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /usr/share/ca-certificates/2849042.pem
	I1123 09:24:03.301458  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:24:03.301469  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem -> /usr/share/ca-certificates/284904.pem
	I1123 09:24:03.302078  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:24:03.321449  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:24:03.344480  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:24:03.366943  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:24:03.388807  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 09:24:03.417016  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:24:03.441422  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:24:03.466546  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:24:03.486297  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 09:24:03.505713  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:24:03.523380  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 09:24:03.541787  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:24:03.553943  332015 ssh_runner.go:195] Run: openssl version
	I1123 09:24:03.560161  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 09:24:03.569902  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 09:24:03.574290  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 09:24:03.574428  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 09:24:03.615270  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:24:03.622991  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:24:03.631168  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:24:03.634814  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:24:03.634879  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:24:03.675522  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:24:03.683531  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 09:24:03.692221  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 09:24:03.695819  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 09:24:03.695881  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 09:24:03.736556  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
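The hash-and-link sequences above use `openssl x509 -hash` to derive the /etc/ssl/certs/<hash>.0 symlink names (e.g. b5213941.0 for minikubeCA.pem). A small Go sketch of one such step, shelling out to openssl the same way (root access to /etc/ssl/certs assumed):

```go
// Sketch: compute the OpenSSL subject hash of a CA cert and symlink it under
// /etc/ssl/certs/<hash>.0, mirroring the openssl + ln -fs commands above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any existing link
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}
```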
	I1123 09:24:03.744109  332015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:24:03.747786  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:24:03.788604  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:24:03.830353  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:24:03.884091  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:24:03.938208  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:24:03.984397  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 09:24:04.045856  332015 kubeadm.go:401] StartCluster: {Name:ha-857095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:24:04.046023  332015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:24:04.046122  332015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:24:04.075084  332015 cri.go:89] found id: "3f803f0d2708c2458335864b38cbe1261399f59c726a34053cba0f4d0c4267e2"
	I1123 09:24:04.075156  332015 cri.go:89] found id: ""
	I1123 09:24:04.075240  332015 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:24:04.094693  332015 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:24:04Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:24:04.094840  332015 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:24:04.120167  332015 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:24:04.120234  332015 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:24:04.120315  332015 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:24:04.133113  332015 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:24:04.133681  332015 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-857095" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:24:04.133848  332015 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-282998/kubeconfig needs updating (will repair): [kubeconfig missing "ha-857095" cluster setting kubeconfig missing "ha-857095" context setting]
	I1123 09:24:04.134501  332015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:04.135077  332015 kapi.go:59] client config for ha-857095: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
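Given the repaired kubeconfig and the rest.Config dumped above, a consumer could reach the ha-857095 API server with client-go roughly as follows (assumes k8s.io/client-go and k8s.io/apimachinery are on the module path; this is not part of the test itself):

```go
// Sketch: load the kubeconfig written by minikube and list the cluster nodes.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21969-282998/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
```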
	I1123 09:24:04.135715  332015 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1123 09:24:04.135788  332015 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1123 09:24:04.135810  332015 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1123 09:24:04.135854  332015 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1123 09:24:04.135880  332015 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1123 09:24:04.135767  332015 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1123 09:24:04.137370  332015 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:24:04.163446  332015 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1123 09:24:04.163519  332015 kubeadm.go:602] duration metric: took 43.265409ms to restartPrimaryControlPlane
	I1123 09:24:04.163544  332015 kubeadm.go:403] duration metric: took 117.700121ms to StartCluster
	I1123 09:24:04.163589  332015 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:04.163671  332015 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:24:04.164252  332015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:24:04.164494  332015 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:24:04.164539  332015 start.go:242] waiting for startup goroutines ...
	I1123 09:24:04.164560  332015 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:24:04.165096  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:24:04.170570  332015 out.go:179] * Enabled addons: 
	I1123 09:24:04.174575  332015 addons.go:530] duration metric: took 10.006073ms for enable addons: enabled=[]
	I1123 09:24:04.174659  332015 start.go:247] waiting for cluster config update ...
	I1123 09:24:04.174681  332015 start.go:256] writing updated cluster config ...
	I1123 09:24:04.178275  332015 out.go:203] 
	I1123 09:24:04.181916  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:24:04.182093  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:24:04.188964  332015 out.go:179] * Starting "ha-857095-m02" control-plane node in "ha-857095" cluster
	I1123 09:24:04.192293  332015 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:24:04.195633  332015 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:24:04.198557  332015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:24:04.198653  332015 cache.go:65] Caching tarball of preloaded images
	I1123 09:24:04.198625  332015 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:24:04.198998  332015 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 09:24:04.199035  332015 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:24:04.199196  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:24:04.236750  332015 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:24:04.236768  332015 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:24:04.236781  332015 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:24:04.236803  332015 start.go:360] acquireMachinesLock for ha-857095-m02: {Name:mk302f2371cf69337e911dfb76261e6364d80001 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:24:04.236853  332015 start.go:364] duration metric: took 36.242µs to acquireMachinesLock for "ha-857095-m02"
	I1123 09:24:04.236872  332015 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:24:04.236877  332015 fix.go:54] fixHost starting: m02
	I1123 09:24:04.237131  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m02 --format={{.State.Status}}
	I1123 09:24:04.264568  332015 fix.go:112] recreateIfNeeded on ha-857095-m02: state=Stopped err=<nil>
	W1123 09:24:04.264592  332015 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:24:04.268071  332015 out.go:252] * Restarting existing docker container for "ha-857095-m02" ...
	I1123 09:24:04.268150  332015 cli_runner.go:164] Run: docker start ha-857095-m02
	I1123 09:24:04.652204  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m02 --format={{.State.Status}}
	I1123 09:24:04.680714  332015 kic.go:430] container "ha-857095-m02" state is running.
	I1123 09:24:04.681090  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m02
	I1123 09:24:04.707062  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:24:04.707317  332015 machine.go:94] provisionDockerMachine start ...
	I1123 09:24:04.707387  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:04.741254  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:04.741586  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33187 <nil> <nil>}
	I1123 09:24:04.741597  332015 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:24:04.742229  332015 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34184->127.0.0.1:33187: read: connection reset by peer
	I1123 09:24:08.002494  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m02
	
	I1123 09:24:08.002568  332015 ubuntu.go:182] provisioning hostname "ha-857095-m02"
	I1123 09:24:08.002678  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:08.029049  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:08.029348  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33187 <nil> <nil>}
	I1123 09:24:08.029358  332015 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-857095-m02 && echo "ha-857095-m02" | sudo tee /etc/hostname
	I1123 09:24:08.253783  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m02
	
	I1123 09:24:08.253924  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:08.291114  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:08.291434  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33187 <nil> <nil>}
	I1123 09:24:08.291450  332015 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857095-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857095-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857095-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:24:08.491050  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:24:08.491119  332015 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 09:24:08.491152  332015 ubuntu.go:190] setting up certificates
	I1123 09:24:08.491194  332015 provision.go:84] configureAuth start
	I1123 09:24:08.491321  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m02
	I1123 09:24:08.526937  332015 provision.go:143] copyHostCerts
	I1123 09:24:08.526984  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:24:08.527020  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 09:24:08.527027  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:24:08.527102  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 09:24:08.527176  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:24:08.527192  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 09:24:08.527197  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:24:08.527222  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 09:24:08.527259  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:24:08.527274  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 09:24:08.527278  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:24:08.527300  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 09:24:08.527343  332015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.ha-857095-m02 san=[127.0.0.1 192.168.49.3 ha-857095-m02 localhost minikube]
	I1123 09:24:09.262765  332015 provision.go:177] copyRemoteCerts
	I1123 09:24:09.262880  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:24:09.262954  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:09.280151  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m02/id_rsa Username:docker}
	I1123 09:24:09.397744  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1123 09:24:09.397799  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 09:24:09.444274  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1123 09:24:09.444335  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 09:24:09.474176  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1123 09:24:09.474229  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:24:09.500995  332015 provision.go:87] duration metric: took 1.009770735s to configureAuth
	I1123 09:24:09.501071  332015 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:24:09.501370  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:24:09.501570  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:09.541669  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:24:09.541983  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33187 <nil> <nil>}
	I1123 09:24:09.541997  332015 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:24:10.717183  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:24:10.717209  332015 machine.go:97] duration metric: took 6.009881771s to provisionDockerMachine
	I1123 09:24:10.717221  332015 start.go:293] postStartSetup for "ha-857095-m02" (driver="docker")
	I1123 09:24:10.717231  332015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:24:10.717289  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:24:10.717340  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:10.743261  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m02/id_rsa Username:docker}
	I1123 09:24:10.873831  332015 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:24:10.882112  332015 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:24:10.882138  332015 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:24:10.882150  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 09:24:10.882203  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 09:24:10.882279  332015 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 09:24:10.882286  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /etc/ssl/certs/2849042.pem
	I1123 09:24:10.882384  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:24:10.897705  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:24:10.928947  332015 start.go:296] duration metric: took 211.710763ms for postStartSetup
	I1123 09:24:10.929078  332015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:24:10.929161  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:10.965095  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m02/id_rsa Username:docker}
	I1123 09:24:11.077996  332015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:24:11.083158  332015 fix.go:56] duration metric: took 6.846271288s for fixHost
	I1123 09:24:11.083241  332015 start.go:83] releasing machines lock for "ha-857095-m02", held for 6.846378251s
	I1123 09:24:11.083359  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m02
	I1123 09:24:11.142549  332015 out.go:179] * Found network options:
	I1123 09:24:11.145622  332015 out.go:179]   - NO_PROXY=192.168.49.2
	W1123 09:24:11.148481  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:24:11.148526  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	I1123 09:24:11.148594  332015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:24:11.148633  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:11.148887  332015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:24:11.148950  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m02
	I1123 09:24:11.179109  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m02/id_rsa Username:docker}
	I1123 09:24:11.188793  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33187 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m02/id_rsa Username:docker}
	I1123 09:24:11.691645  332015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:24:11.715317  332015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:24:11.715396  332015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:24:11.749537  332015 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:24:11.749567  332015 start.go:496] detecting cgroup driver to use...
	I1123 09:24:11.749599  332015 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:24:11.749652  332015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:24:11.790996  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:24:11.823649  332015 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:24:11.823714  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:24:11.850236  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:24:11.868366  332015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:24:12.144454  332015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:24:12.480952  332015 docker.go:234] disabling docker service ...
	I1123 09:24:12.481086  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:24:12.566871  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:24:12.598895  332015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:24:12.943816  332015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:24:13.198846  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:24:13.220755  332015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:24:13.238071  332015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:24:13.238185  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.246445  332015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:24:13.246513  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.254941  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.263305  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.271300  332015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:24:13.278821  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.288129  332015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.296195  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:24:13.304236  332015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:24:13.311253  332015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:24:13.318479  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:24:13.535394  332015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:25:43.810612  332015 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.275182672s)
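Note: the crio restart above accounts for almost the entire 09:24:13 to 09:25:43 gap (about 1m30s) in re-provisioning this node. A minimal sketch of how that delay could be inspected on the m02 node, assuming SSH access through this profile; these are standard minikube/systemd commands and are not part of the recorded run:

  # hypothetical follow-up, not from this log: check the crio unit and its recent journal on m02
  minikube -p ha-857095 ssh -n ha-857095-m02 -- 'sudo systemctl status crio --no-pager'
  minikube -p ha-857095 ssh -n ha-857095-m02 -- 'sudo journalctl -u crio --no-pager | tail -n 50'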
	I1123 09:25:43.810639  332015 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:25:43.810701  332015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:25:43.814933  332015 start.go:564] Will wait 60s for crictl version
	I1123 09:25:43.814992  332015 ssh_runner.go:195] Run: which crictl
	I1123 09:25:43.818922  332015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:25:43.846107  332015 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:25:43.846200  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:25:43.877706  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:25:43.909681  332015 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:25:43.912736  332015 out.go:179]   - env NO_PROXY=192.168.49.2
	I1123 09:25:43.915738  332015 cli_runner.go:164] Run: docker network inspect ha-857095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:25:43.931587  332015 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 09:25:43.935281  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:25:43.944694  332015 mustload.go:66] Loading cluster: ha-857095
	I1123 09:25:43.944941  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:43.945204  332015 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:25:43.962501  332015 host.go:66] Checking if "ha-857095" exists ...
	I1123 09:25:43.962775  332015 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095 for IP: 192.168.49.3
	I1123 09:25:43.962789  332015 certs.go:195] generating shared ca certs ...
	I1123 09:25:43.962805  332015 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:25:43.962924  332015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 09:25:43.962987  332015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 09:25:43.962999  332015 certs.go:257] generating profile certs ...
	I1123 09:25:43.963077  332015 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key
	I1123 09:25:43.963146  332015 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.66daad91
	I1123 09:25:43.963186  332015 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key
	I1123 09:25:43.963194  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1123 09:25:43.963206  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1123 09:25:43.963217  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1123 09:25:43.963237  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1123 09:25:43.963248  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1123 09:25:43.963258  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1123 09:25:43.963270  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1123 09:25:43.963281  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1123 09:25:43.963328  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 09:25:43.963357  332015 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 09:25:43.963369  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:25:43.963395  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:25:43.963419  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:25:43.963442  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 09:25:43.963488  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:25:43.963520  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem -> /usr/share/ca-certificates/284904.pem
	I1123 09:25:43.963531  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /usr/share/ca-certificates/2849042.pem
	I1123 09:25:43.963542  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:25:43.963592  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:25:43.980751  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:25:44.081825  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1123 09:25:44.085802  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1123 09:25:44.094194  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1123 09:25:44.097956  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1123 09:25:44.106256  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1123 09:25:44.110273  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1123 09:25:44.118652  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1123 09:25:44.122439  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1123 09:25:44.130532  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1123 09:25:44.133997  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1123 09:25:44.142041  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1123 09:25:44.145750  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1123 09:25:44.154268  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:25:44.174536  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:25:44.191976  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:25:44.210168  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:25:44.228737  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 09:25:44.246711  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:25:44.264397  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:25:44.282548  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:25:44.301229  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 09:25:44.321400  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 09:25:44.340621  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:25:44.360219  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1123 09:25:44.374691  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1123 09:25:44.388106  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1123 09:25:44.402723  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1123 09:25:44.416062  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1123 09:25:44.429635  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1123 09:25:44.443050  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1123 09:25:44.456835  332015 ssh_runner.go:195] Run: openssl version
	I1123 09:25:44.463525  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 09:25:44.472737  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 09:25:44.476731  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 09:25:44.476844  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 09:25:44.517979  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 09:25:44.525850  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 09:25:44.536453  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 09:25:44.542534  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 09:25:44.542604  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 09:25:44.599744  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:25:44.613671  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:25:44.626248  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:25:44.630279  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:25:44.630347  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:25:44.717285  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:25:44.727653  332015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:25:44.734478  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:25:44.781588  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:25:44.834781  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:25:44.900074  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:25:44.968766  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:25:45.046196  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 09:25:45.126791  332015 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1123 09:25:45.126936  332015 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-857095-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:25:45.126971  332015 kube-vip.go:115] generating kube-vip config ...
	I1123 09:25:45.127039  332015 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1123 09:25:45.160018  332015 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:25:45.160101  332015 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
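Note: the static pod manifest above is what provides the HA control-plane VIP 192.168.49.254. kube-vip advertises the address over ARP on eth0 (vip_arp, vip_interface) and elects a single holder through the plndr-cp-lock Lease in kube-system; because the ip_vs module probe a few lines earlier came back empty, control-plane load-balancing is disabled and the VIP simply follows the current leader. A hedged way to confirm where the VIP is bound, assuming SSH and kubectl access to this profile (not part of the recorded run):

  # hypothetical checks, not from this log
  minikube -p ha-857095 ssh -n ha-857095-m02 -- 'ip addr show dev eth0 | grep 192.168.49.254'
  kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'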
	I1123 09:25:45.160193  332015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:25:45.182092  332015 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:25:45.182333  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1123 09:25:45.194768  332015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 09:25:45.221310  332015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:25:45.267324  332015 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1123 09:25:45.295755  332015 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1123 09:25:45.299778  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:25:45.311639  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:25:45.546785  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:25:45.562952  332015 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:25:45.563297  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:45.567237  332015 out.go:179] * Verifying Kubernetes components...
	I1123 09:25:45.570200  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:25:45.791204  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:25:45.805221  332015 kapi.go:59] client config for ha-857095: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1123 09:25:45.805300  332015 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1123 09:25:45.805568  332015 node_ready.go:35] waiting up to 6m0s for node "ha-857095-m02" to be "Ready" ...
	I1123 09:25:46.975594  332015 node_ready.go:49] node "ha-857095-m02" is "Ready"
	I1123 09:25:46.975708  332015 node_ready.go:38] duration metric: took 1.170108444s for node "ha-857095-m02" to be "Ready" ...
	I1123 09:25:46.975722  332015 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:25:46.979095  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:25:47.015790  332015 api_server.go:72] duration metric: took 1.452452994s to wait for apiserver process to appear ...
	I1123 09:25:47.015827  332015 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:25:47.015848  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:47.055731  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 09:25:47.055771  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 09:25:47.516044  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:47.524553  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:25:47.524596  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:25:48.015961  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:48.027139  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:25:48.027189  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:25:48.516751  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:48.530354  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:25:48.530386  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:25:49.015933  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:49.026181  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:25:49.026224  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:25:49.516868  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:49.544816  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:25:49.544849  332015 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:25:50.015977  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:25:50.031576  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 09:25:50.034154  332015 api_server.go:141] control plane version: v1.34.1
	I1123 09:25:50.034191  332015 api_server.go:131] duration metric: took 3.018357536s to wait for apiserver health ...
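Editor's note: the block above shows the restart waiting on the apiserver. /healthz is polled roughly every 500ms; a 500 response whose poststarthooks (rbac/bootstrap-roles, start-kubernetes-service-cidr-controller) are still failing counts as not-ready, and the wait ends on the first plain 200 "ok" (about 3s here). A minimal Go sketch of that polling pattern, with illustrative names and TLS verification skipped for brevity (not minikube's actual implementation):

    // healthzwait.go — illustrative only: poll /healthz until it returns 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // A real client would trust the cluster CA instead of skipping verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // plain "ok"
                }
                // 500 with "[-]poststarthook/... failed" means the apiserver is not ready yet.
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.49.2:8443/healthz", 3*time.Minute); err != nil {
            fmt.Println(err)
        }
    }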
	I1123 09:25:50.034201  332015 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:25:50.133527  332015 system_pods.go:59] 26 kube-system pods found
	I1123 09:25:50.133574  332015 system_pods.go:61] "coredns-66bc5c9577-gqskt" [9ec3e73a-4033-41ae-927a-50584a3e9653] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:25:50.133583  332015 system_pods.go:61] "coredns-66bc5c9577-kqvhl" [bcbbf58b-9d2d-4a51-b4c1-bfec16447df5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:25:50.133590  332015 system_pods.go:61] "etcd-ha-857095" [3eaffe71-9ce6-4a9b-8530-1de6a4ec8773] Running
	I1123 09:25:50.133596  332015 system_pods.go:61] "etcd-ha-857095-m02" [5f8628c9-5725-4ca9-9622-b42a9b63c833] Running
	I1123 09:25:50.133600  332015 system_pods.go:61] "etcd-ha-857095-m03" [2ec71863-ebd8-45ca-9f19-707503671154] Running
	I1123 09:25:50.133603  332015 system_pods.go:61] "kindnet-8bs9t" [d9dee210-2075-4095-8540-c13c401e5a68] Running
	I1123 09:25:50.133607  332015 system_pods.go:61] "kindnet-ls8hm" [b7c7ef9d-ebdd-4bd4-97e6-595b84787117] Running
	I1123 09:25:50.133622  332015 system_pods.go:61] "kindnet-r7p2c" [a4f419f5-ecbc-48e6-8f98-732c4ac5a977] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:25:50.133636  332015 system_pods.go:61] "kindnet-v5cch" [4bfed9c2-b321-43a0-a18b-c867696cf4cb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:25:50.133642  332015 system_pods.go:61] "kube-apiserver-ha-857095" [697606bd-c111-4922-adda-6902a7f40915] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:25:50.133647  332015 system_pods.go:61] "kube-apiserver-ha-857095-m02" [8516bae2-f830-4a82-aa30-dbd7bf657b52] Running
	I1123 09:25:50.133659  332015 system_pods.go:61] "kube-apiserver-ha-857095-m03" [9f6f5d7d-9bba-4b26-b928-05119bbc98af] Running
	I1123 09:25:50.133666  332015 system_pods.go:61] "kube-controller-manager-ha-857095" [026d1873-0078-4c87-a9c1-b5a615844bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:25:50.133671  332015 system_pods.go:61] "kube-controller-manager-ha-857095-m02" [51f4d1ee-3b47-49f2-907e-68598e7d88e1] Running
	I1123 09:25:50.133694  332015 system_pods.go:61] "kube-controller-manager-ha-857095-m03" [234e7d83-1430-4ee4-91e4-73bf5e7221dc] Running
	I1123 09:25:50.133700  332015 system_pods.go:61] "kube-proxy-275zc" [b46e4648-46c6-4f04-85bc-bbfd4aedc821] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:25:50.133704  332015 system_pods.go:61] "kube-proxy-6k46z" [f2387038-f806-4417-961a-cf4390f4b4a5] Running
	I1123 09:25:50.133712  332015 system_pods.go:61] "kube-proxy-9qgbr" [a03beba1-4074-45e0-a3a0-a4cf0917b9a8] Running
	I1123 09:25:50.133715  332015 system_pods.go:61] "kube-proxy-lqqmc" [81a61d2b-bb1b-46d7-9acc-035150e8061b] Running
	I1123 09:25:50.133721  332015 system_pods.go:61] "kube-scheduler-ha-857095" [0598722f-31ac-4529-8b00-94c9bccf8255] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:25:50.133732  332015 system_pods.go:61] "kube-scheduler-ha-857095-m02" [0d16a804-69c1-47f1-b32c-3b35f950765f] Running
	I1123 09:25:50.133737  332015 system_pods.go:61] "kube-scheduler-ha-857095-m03" [aaf4d61f-0ec3-4e06-912a-a87fc3ab3cdb] Running
	I1123 09:25:50.133741  332015 system_pods.go:61] "kube-vip-ha-857095" [41b5690c-90a6-4557-9e9c-fcb76fe0c548] Running
	I1123 09:25:50.133753  332015 system_pods.go:61] "kube-vip-ha-857095-m02" [9c7a58ce-d823-401a-9695-36a0b87ab3ca] Running
	I1123 09:25:50.133757  332015 system_pods.go:61] "kube-vip-ha-857095-m03" [3830c657-5386-4214-a319-d42e19a40c12] Running
	I1123 09:25:50.133761  332015 system_pods.go:61] "storage-provisioner" [fd6347d8-5602-4a34-875b-811bc8ea2bc2] Running
	I1123 09:25:50.133772  332015 system_pods.go:74] duration metric: took 99.565974ms to wait for pod list to return data ...
	I1123 09:25:50.133785  332015 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:25:50.235662  332015 default_sa.go:45] found service account: "default"
	I1123 09:25:50.235698  332015 default_sa.go:55] duration metric: took 101.906307ms for default service account to be created ...
	I1123 09:25:50.235710  332015 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:25:50.276224  332015 system_pods.go:86] 26 kube-system pods found
	I1123 09:25:50.276258  332015 system_pods.go:89] "coredns-66bc5c9577-gqskt" [9ec3e73a-4033-41ae-927a-50584a3e9653] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:25:50.276269  332015 system_pods.go:89] "coredns-66bc5c9577-kqvhl" [bcbbf58b-9d2d-4a51-b4c1-bfec16447df5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:25:50.276284  332015 system_pods.go:89] "etcd-ha-857095" [3eaffe71-9ce6-4a9b-8530-1de6a4ec8773] Running
	I1123 09:25:50.276290  332015 system_pods.go:89] "etcd-ha-857095-m02" [5f8628c9-5725-4ca9-9622-b42a9b63c833] Running
	I1123 09:25:50.276295  332015 system_pods.go:89] "etcd-ha-857095-m03" [2ec71863-ebd8-45ca-9f19-707503671154] Running
	I1123 09:25:50.276300  332015 system_pods.go:89] "kindnet-8bs9t" [d9dee210-2075-4095-8540-c13c401e5a68] Running
	I1123 09:25:50.276308  332015 system_pods.go:89] "kindnet-ls8hm" [b7c7ef9d-ebdd-4bd4-97e6-595b84787117] Running
	I1123 09:25:50.276314  332015 system_pods.go:89] "kindnet-r7p2c" [a4f419f5-ecbc-48e6-8f98-732c4ac5a977] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:25:50.276328  332015 system_pods.go:89] "kindnet-v5cch" [4bfed9c2-b321-43a0-a18b-c867696cf4cb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:25:50.276336  332015 system_pods.go:89] "kube-apiserver-ha-857095" [697606bd-c111-4922-adda-6902a7f40915] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:25:50.276345  332015 system_pods.go:89] "kube-apiserver-ha-857095-m02" [8516bae2-f830-4a82-aa30-dbd7bf657b52] Running
	I1123 09:25:50.276356  332015 system_pods.go:89] "kube-apiserver-ha-857095-m03" [9f6f5d7d-9bba-4b26-b928-05119bbc98af] Running
	I1123 09:25:50.276368  332015 system_pods.go:89] "kube-controller-manager-ha-857095" [026d1873-0078-4c87-a9c1-b5a615844bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:25:50.276374  332015 system_pods.go:89] "kube-controller-manager-ha-857095-m02" [51f4d1ee-3b47-49f2-907e-68598e7d88e1] Running
	I1123 09:25:50.276389  332015 system_pods.go:89] "kube-controller-manager-ha-857095-m03" [234e7d83-1430-4ee4-91e4-73bf5e7221dc] Running
	I1123 09:25:50.276395  332015 system_pods.go:89] "kube-proxy-275zc" [b46e4648-46c6-4f04-85bc-bbfd4aedc821] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:25:50.276399  332015 system_pods.go:89] "kube-proxy-6k46z" [f2387038-f806-4417-961a-cf4390f4b4a5] Running
	I1123 09:25:50.276405  332015 system_pods.go:89] "kube-proxy-9qgbr" [a03beba1-4074-45e0-a3a0-a4cf0917b9a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:25:50.276409  332015 system_pods.go:89] "kube-proxy-lqqmc" [81a61d2b-bb1b-46d7-9acc-035150e8061b] Running
	I1123 09:25:50.276418  332015 system_pods.go:89] "kube-scheduler-ha-857095" [0598722f-31ac-4529-8b00-94c9bccf8255] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:25:50.276439  332015 system_pods.go:89] "kube-scheduler-ha-857095-m02" [0d16a804-69c1-47f1-b32c-3b35f950765f] Running
	I1123 09:25:50.276443  332015 system_pods.go:89] "kube-scheduler-ha-857095-m03" [aaf4d61f-0ec3-4e06-912a-a87fc3ab3cdb] Running
	I1123 09:25:50.276448  332015 system_pods.go:89] "kube-vip-ha-857095" [41b5690c-90a6-4557-9e9c-fcb76fe0c548] Running
	I1123 09:25:50.276452  332015 system_pods.go:89] "kube-vip-ha-857095-m02" [9c7a58ce-d823-401a-9695-36a0b87ab3ca] Running
	I1123 09:25:50.276459  332015 system_pods.go:89] "kube-vip-ha-857095-m03" [3830c657-5386-4214-a319-d42e19a40c12] Running
	I1123 09:25:50.276463  332015 system_pods.go:89] "storage-provisioner" [fd6347d8-5602-4a34-875b-811bc8ea2bc2] Running
	I1123 09:25:50.276469  332015 system_pods.go:126] duration metric: took 40.753939ms to wait for k8s-apps to be running ...
	I1123 09:25:50.276477  332015 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:25:50.276538  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:25:50.296938  332015 system_svc.go:56] duration metric: took 20.452092ms WaitForService to wait for kubelet
	I1123 09:25:50.296975  332015 kubeadm.go:587] duration metric: took 4.73364502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:25:50.296993  332015 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:25:50.317399  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:25:50.317469  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:25:50.317482  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:25:50.317487  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:25:50.317491  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:25:50.317495  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:25:50.317499  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:25:50.317511  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:25:50.317521  332015 node_conditions.go:105] duration metric: took 20.520835ms to run NodePressure ...
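Editor's note: the NodePressure step above simply reads each node's reported capacity (CPU, ephemeral storage) and conditions from the API. A rough client-go sketch of the same query; the kubeconfig path is an assumption and this is not the helper minikube itself uses:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed path: the kubeconfig minikube writes for this profile.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
            // A node under pressure would report these conditions as True.
            for _, c := range n.Status.Conditions {
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
                    fmt.Printf("  %s reports %s\n", n.Name, c.Type)
                }
            }
        }
    }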
	I1123 09:25:50.317534  332015 start.go:242] waiting for startup goroutines ...
	I1123 09:25:50.317564  332015 start.go:256] writing updated cluster config ...
	I1123 09:25:50.323143  332015 out.go:203] 
	I1123 09:25:50.326401  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:50.326524  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:25:50.330156  332015 out.go:179] * Starting "ha-857095-m03" control-plane node in "ha-857095" cluster
	I1123 09:25:50.334438  332015 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:25:50.338097  332015 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:25:50.340607  332015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:25:50.340654  332015 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:25:50.340838  332015 cache.go:65] Caching tarball of preloaded images
	I1123 09:25:50.340926  332015 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 09:25:50.340940  332015 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:25:50.341072  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:25:50.370726  332015 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:25:50.370752  332015 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:25:50.370766  332015 cache.go:243] Successfully downloaded all kic artifacts
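Editor's note: the kic base image check above is a local docker-daemon lookup; because the pinned kicbase-builds digest is already present, the pull and load are skipped. The same probe can be done by shelling out to `docker image inspect`, which exits non-zero when the image is absent. A small illustrative Go version (the tag below is copied from the log; error handling is minimal):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // imageInLocalDaemon reports whether the docker daemon already has the image,
    // so a pull can be skipped — roughly what the "Checking for ... in local
    // docker daemon" step above does.
    func imageInLocalDaemon(ref string) bool {
        // `docker image inspect` exits non-zero when the image is missing.
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948"
        if imageInLocalDaemon(ref) {
            fmt.Println("found in local docker daemon, skipping pull")
        } else {
            fmt.Println("not present, would pull")
        }
    }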
	I1123 09:25:50.370789  332015 start.go:360] acquireMachinesLock for ha-857095-m03: {Name:mk6acf38570d035eb912e1d2f030641425a2af59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:25:50.370845  332015 start.go:364] duration metric: took 36.226µs to acquireMachinesLock for "ha-857095-m03"
	I1123 09:25:50.370869  332015 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:25:50.370875  332015 fix.go:54] fixHost starting: m03
	I1123 09:25:50.371144  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m03 --format={{.State.Status}}
	I1123 09:25:50.400510  332015 fix.go:112] recreateIfNeeded on ha-857095-m03: state=Stopped err=<nil>
	W1123 09:25:50.400540  332015 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:25:50.404410  332015 out.go:252] * Restarting existing docker container for "ha-857095-m03" ...
	I1123 09:25:50.404500  332015 cli_runner.go:164] Run: docker start ha-857095-m03
	I1123 09:25:50.796227  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m03 --format={{.State.Status}}
	I1123 09:25:50.840417  332015 kic.go:430] container "ha-857095-m03" state is running.
	I1123 09:25:50.840758  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m03
	I1123 09:25:50.894166  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:25:50.894416  332015 machine.go:94] provisionDockerMachine start ...
	I1123 09:25:50.894479  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:50.924984  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:25:50.925293  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1123 09:25:50.925301  332015 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:25:50.926098  332015 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 09:25:54.161789  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m03
	
	I1123 09:25:54.161877  332015 ubuntu.go:182] provisioning hostname "ha-857095-m03"
	I1123 09:25:54.161974  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:54.189870  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:25:54.190176  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1123 09:25:54.190186  332015 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-857095-m03 && echo "ha-857095-m03" | sudo tee /etc/hostname
	I1123 09:25:54.416867  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m03
	
	I1123 09:25:54.416961  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:54.451607  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:25:54.451922  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1123 09:25:54.451938  332015 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857095-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857095-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857095-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:25:54.684288  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
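Editor's note: provisioning above runs three SSH commands against the restarted container (forwarded to port 33192 on localhost): read the current hostname, set it with hostname/tee, and patch the 127.0.1.1 record in /etc/hosts. A compact sketch of that pattern with golang.org/x/crypto/ssh; the key path is illustrative and host-key checking is skipped as it would be for a throwaway local test container:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func runSSH(client *ssh.Client, cmd string) (string, error) {
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        // Assumed path: the per-machine key minikube generates for the node.
        key, err := os.ReadFile("/home/jenkins/.minikube/machines/ha-857095-m03/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33192", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        host := "ha-857095-m03"
        for _, cmd := range []string{
            "hostname",
            fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", host, host),
            fmt.Sprintf("grep -q '^127.0.1.1' /etc/hosts && sudo sed -i 's/^127.0.1.1\\s.*/127.0.1.1 %s/' /etc/hosts || echo '127.0.1.1 %s' | sudo tee -a /etc/hosts", host, host),
        } {
            out, err := runSSH(client, cmd)
            fmt.Printf("$ %s\n%s (err=%v)\n", cmd, out, err)
        }
    }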
	I1123 09:25:54.684344  332015 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 09:25:54.684362  332015 ubuntu.go:190] setting up certificates
	I1123 09:25:54.684372  332015 provision.go:84] configureAuth start
	I1123 09:25:54.684450  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m03
	I1123 09:25:54.708108  332015 provision.go:143] copyHostCerts
	I1123 09:25:54.708151  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:25:54.708186  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 09:25:54.708192  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:25:54.708273  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 09:25:54.708351  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:25:54.708368  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 09:25:54.708373  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:25:54.708399  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 09:25:54.708439  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:25:54.708455  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 09:25:54.708459  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:25:54.708484  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 09:25:54.708532  332015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.ha-857095-m03 san=[127.0.0.1 192.168.49.4 ha-857095-m03 localhost minikube]
	I1123 09:25:54.877285  332015 provision.go:177] copyRemoteCerts
	I1123 09:25:54.877362  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:25:54.877428  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:54.897354  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m03/id_rsa Username:docker}
	I1123 09:25:55.052011  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1123 09:25:55.052077  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 09:25:55.110347  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1123 09:25:55.110418  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:25:55.160630  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1123 09:25:55.160706  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 09:25:55.206791  332015 provision.go:87] duration metric: took 522.405111ms to configureAuth
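Editor's note: configureAuth regenerates the machine's server certificate, signing it with the local CA and embedding the SANs listed above (127.0.0.1, 192.168.49.4, ha-857095-m03, localhost, minikube), then copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. The essentials of such a SAN-bearing server cert in stdlib crypto/x509 look roughly like this (a throwaway self-signed CA stands in for the profile's ca.pem, purely for illustration):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (illustrative; the real flow loads ca.pem / ca-key.pem).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs seen in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "ha-857095-m03", Organization: []string{"jenkins.ha-857095-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-857095-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }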
	I1123 09:25:55.206859  332015 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:25:55.207143  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:55.207288  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:55.231475  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:25:55.231787  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1123 09:25:55.231807  332015 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:25:55.818269  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:25:55.818294  332015 machine.go:97] duration metric: took 4.923860996s to provisionDockerMachine
	I1123 09:25:55.818307  332015 start.go:293] postStartSetup for "ha-857095-m03" (driver="docker")
	I1123 09:25:55.818318  332015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:25:55.818419  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:25:55.818465  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:55.838899  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m03/id_rsa Username:docker}
	I1123 09:25:55.945315  332015 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:25:55.948680  332015 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:25:55.948711  332015 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:25:55.948723  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 09:25:55.948779  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 09:25:55.948855  332015 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 09:25:55.948865  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /etc/ssl/certs/2849042.pem
	I1123 09:25:55.948961  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:25:55.956253  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:25:55.975283  332015 start.go:296] duration metric: took 156.955332ms for postStartSetup
	I1123 09:25:55.975413  332015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:25:55.975489  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:55.995364  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m03/id_rsa Username:docker}
	I1123 09:25:56.102831  332015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:25:56.108114  332015 fix.go:56] duration metric: took 5.737232288s for fixHost
	I1123 09:25:56.108138  332015 start.go:83] releasing machines lock for "ha-857095-m03", held for 5.737279936s
	I1123 09:25:56.108206  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m03
	I1123 09:25:56.129684  332015 out.go:179] * Found network options:
	I1123 09:25:56.132653  332015 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1123 09:25:56.138460  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:25:56.138495  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:25:56.138520  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:25:56.138534  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	I1123 09:25:56.138602  332015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:25:56.138645  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:56.138894  332015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:25:56.138945  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:25:56.160178  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m03/id_rsa Username:docker}
	I1123 09:25:56.178028  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m03/id_rsa Username:docker}
	I1123 09:25:56.510498  332015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:25:56.519235  332015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:25:56.519358  332015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:25:56.532899  332015 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:25:56.532974  332015 start.go:496] detecting cgroup driver to use...
	I1123 09:25:56.533021  332015 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:25:56.533095  332015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:25:56.563353  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:25:56.582194  332015 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:25:56.582307  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:25:56.604304  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:25:56.624857  332015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:25:56.880123  332015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:25:57.130771  332015 docker.go:234] disabling docker service ...
	I1123 09:25:57.130892  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:25:57.155366  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:25:57.181953  332015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:25:57.470517  332015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:25:57.703602  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:25:57.722751  332015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:25:57.754960  332015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:25:57.755080  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.788981  332015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:25:57.789102  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.805042  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.815549  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.830253  332015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:25:57.840395  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.853329  332015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.867204  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:25:57.882910  332015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:25:57.895568  332015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:25:57.910955  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:25:58.202730  332015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:25:59.499296  332015 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.296482203s)
	I1123 09:25:59.499324  332015 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:25:59.499400  332015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:25:59.503386  332015 start.go:564] Will wait 60s for crictl version
	I1123 09:25:59.503504  332015 ssh_runner.go:195] Run: which crictl
	I1123 09:25:59.507281  332015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:25:59.537756  332015 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:25:59.537841  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:25:59.571202  332015 ssh_runner.go:195] Run: crio --version
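Editor's note: after crio is restarted the runner waits up to 60s for /var/run/crio/crio.sock to appear before probing it with crictl and crio --version. That wait is just a stat-until-deadline loop; a tiny illustrative Go version:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for a socket path until it exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }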
	I1123 09:25:59.604176  332015 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:25:59.607193  332015 out.go:179]   - env NO_PROXY=192.168.49.2
	I1123 09:25:59.610134  332015 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1123 09:25:59.613170  332015 cli_runner.go:164] Run: docker network inspect ha-857095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:25:59.630136  332015 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 09:25:59.634914  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:25:59.644730  332015 mustload.go:66] Loading cluster: ha-857095
	I1123 09:25:59.644972  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:25:59.645239  332015 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:25:59.662875  332015 host.go:66] Checking if "ha-857095" exists ...
	I1123 09:25:59.663179  332015 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095 for IP: 192.168.49.4
	I1123 09:25:59.663188  332015 certs.go:195] generating shared ca certs ...
	I1123 09:25:59.663201  332015 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:25:59.663327  332015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 09:25:59.663365  332015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 09:25:59.663372  332015 certs.go:257] generating profile certs ...
	I1123 09:25:59.663446  332015 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key
	I1123 09:25:59.663522  332015 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key.283ff493
	I1123 09:25:59.663567  332015 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key
	I1123 09:25:59.663575  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1123 09:25:59.663589  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1123 09:25:59.663601  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1123 09:25:59.663612  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1123 09:25:59.663621  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1123 09:25:59.663633  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1123 09:25:59.663644  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1123 09:25:59.663654  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1123 09:25:59.663702  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 09:25:59.663734  332015 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 09:25:59.663742  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:25:59.663771  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:25:59.663797  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:25:59.663820  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 09:25:59.663870  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:25:59.663898  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem -> /usr/share/ca-certificates/284904.pem
	I1123 09:25:59.663912  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /usr/share/ca-certificates/2849042.pem
	I1123 09:25:59.663923  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:25:59.663978  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:25:59.689941  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:25:59.793738  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1123 09:25:59.797235  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1123 09:25:59.805196  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1123 09:25:59.808653  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1123 09:25:59.816623  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1123 09:25:59.819984  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1123 09:25:59.828037  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1123 09:25:59.831812  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1123 09:25:59.839915  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1123 09:25:59.843477  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1123 09:25:59.851542  332015 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1123 09:25:59.855295  332015 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1123 09:25:59.863949  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:25:59.885646  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:25:59.904286  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:25:59.924769  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:25:59.944702  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 09:25:59.963610  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:25:59.984488  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:26:00.117342  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:26:00.182322  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 09:26:00.220393  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 09:26:00.303614  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:26:00.335892  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1123 09:26:00.355160  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1123 09:26:00.374206  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1123 09:26:00.392709  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1123 09:26:00.409109  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1123 09:26:00.425117  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1123 09:26:00.439914  332015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1123 09:26:00.464465  332015 ssh_runner.go:195] Run: openssl version
	I1123 09:26:00.472524  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 09:26:00.483656  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 09:26:00.487711  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 09:26:00.487827  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 09:26:00.532783  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 09:26:00.543887  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 09:26:00.551979  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 09:26:00.555635  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 09:26:00.555720  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 09:26:00.597611  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:26:00.605512  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:26:00.613913  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:00.617669  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:00.617766  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:00.660921  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:26:00.669960  332015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:26:00.674647  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:26:00.723335  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:26:00.764258  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:26:00.804912  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:26:00.845808  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:26:00.888833  332015 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
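Editor's note: the `openssl x509 -checkend 86400` runs above confirm that none of the control-plane certificates expire within the next 24 hours. The equivalent check in Go is a PEM decode plus a NotAfter comparison; a sketch (the file path is one of those checked above, chosen for illustration):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within the given window — the `openssl -checkend` equivalent.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }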
	I1123 09:26:00.931554  332015 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1123 09:26:00.931679  332015 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-857095-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:26:00.931715  332015 kube-vip.go:115] generating kube-vip config ...
	I1123 09:26:00.931766  332015 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1123 09:26:00.944231  332015 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:26:00.944300  332015 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
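The `lsmod | grep ip_vs` probe above decides whether kube-vip may enable IPVS-based control-plane load balancing; since no ip_vs modules are loaded (exit status 1), only the ARP-advertised virtual IP 192.168.49.254 is configured (`vip_arp: "true"`, with leader election through the `plndr-cp-lock` lease). A rough sketch of the same capability check, assuming a Linux host where lsmod is available:

    # grep's exit status tells us whether any ip_vs kernel module is loaded.
    if lsmod | grep -q ip_vs; then
      echo "ip_vs present: IPVS control-plane load balancing could be enabled"
    else
      echo "ip_vs absent: fall back to the ARP-advertised VIP only"
    fi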
	I1123 09:26:00.944366  332015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:26:00.952127  332015 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:26:00.952218  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1123 09:26:00.959898  332015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 09:26:00.974683  332015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:26:00.988424  332015 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
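Three files are pushed to the node here: the kubeadm drop-in for kubelet, the kubelet unit itself, and the kube-vip manifest. Because the manifest lands under /etc/kubernetes/manifests, the kubelet runs kube-vip as a static pod without any apiserver involvement. A sketch of the same placement, assuming a local file named kube-vip.yaml:

    # Static pod manifests in the kubelet's staticPodPath (/etc/kubernetes/manifests
    # under kubeadm defaults) are started by the kubelet itself, no apiserver needed.
    sudo install -m 644 kube-vip.yaml /etc/kubernetes/manifests/kube-vip.yaml
    sudo systemctl daemon-reload && sudo systemctl restart kubelet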
	I1123 09:26:01.007528  332015 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1123 09:26:01.011388  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:26:01.021832  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:26:01.167574  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:26:01.186465  332015 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:26:01.187024  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:01.191851  332015 out.go:179] * Verifying Kubernetes components...
	I1123 09:26:01.194848  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:26:01.336348  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:26:01.352032  332015 kapi.go:59] client config for ha-857095: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1123 09:26:01.352169  332015 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1123 09:26:01.352449  332015 node_ready.go:35] waiting up to 6m0s for node "ha-857095-m03" to be "Ready" ...
	I1123 09:26:01.355787  332015 node_ready.go:49] node "ha-857095-m03" is "Ready"
	I1123 09:26:01.355816  332015 node_ready.go:38] duration metric: took 3.32939ms for node "ha-857095-m03" to be "Ready" ...
	I1123 09:26:01.355830  332015 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:26:01.355885  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:01.856392  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:02.356689  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:02.856504  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:03.356575  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:03.856101  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:04.356803  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:04.856202  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:05.356951  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:05.856542  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:06.356037  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:06.856518  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:07.356012  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:07.856915  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:08.356635  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:08.856266  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:09.356016  332015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:26:09.375020  332015 api_server.go:72] duration metric: took 8.188500317s to wait for apiserver process to appear ...
	I1123 09:26:09.375044  332015 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:26:09.375064  332015 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 09:26:09.384535  332015 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 09:26:09.386418  332015 api_server.go:141] control plane version: v1.34.1
	I1123 09:26:09.386440  332015 api_server.go:131] duration metric: took 11.388651ms to wait for apiserver health ...
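Once the kube-apiserver process appears, readiness is confirmed through the /healthz endpoint, which default RBAC (the system:public-info-viewer binding) exposes even to anonymous clients. Roughly the same probe from a shell, using the node IP seen above:

    # -k skips TLS verification; a healthy apiserver answers HTTP 200 with body "ok".
    curl -sk https://192.168.49.2:8443/healthz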
	I1123 09:26:09.386448  332015 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:26:09.406325  332015 system_pods.go:59] 26 kube-system pods found
	I1123 09:26:09.407759  332015 system_pods.go:61] "coredns-66bc5c9577-gqskt" [9ec3e73a-4033-41ae-927a-50584a3e9653] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:26:09.407805  332015 system_pods.go:61] "coredns-66bc5c9577-kqvhl" [bcbbf58b-9d2d-4a51-b4c1-bfec16447df5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:26:09.407831  332015 system_pods.go:61] "etcd-ha-857095" [3eaffe71-9ce6-4a9b-8530-1de6a4ec8773] Running
	I1123 09:26:09.407852  332015 system_pods.go:61] "etcd-ha-857095-m02" [5f8628c9-5725-4ca9-9622-b42a9b63c833] Running
	I1123 09:26:09.407873  332015 system_pods.go:61] "etcd-ha-857095-m03" [2ec71863-ebd8-45ca-9f19-707503671154] Running
	I1123 09:26:09.407906  332015 system_pods.go:61] "kindnet-8bs9t" [d9dee210-2075-4095-8540-c13c401e5a68] Running
	I1123 09:26:09.407931  332015 system_pods.go:61] "kindnet-ls8hm" [b7c7ef9d-ebdd-4bd4-97e6-595b84787117] Running
	I1123 09:26:09.407950  332015 system_pods.go:61] "kindnet-r7p2c" [a4f419f5-ecbc-48e6-8f98-732c4ac5a977] Running
	I1123 09:26:09.407971  332015 system_pods.go:61] "kindnet-v5cch" [4bfed9c2-b321-43a0-a18b-c867696cf4cb] Running
	I1123 09:26:09.407992  332015 system_pods.go:61] "kube-apiserver-ha-857095" [697606bd-c111-4922-adda-6902a7f40915] Running
	I1123 09:26:09.408020  332015 system_pods.go:61] "kube-apiserver-ha-857095-m02" [8516bae2-f830-4a82-aa30-dbd7bf657b52] Running
	I1123 09:26:09.408046  332015 system_pods.go:61] "kube-apiserver-ha-857095-m03" [9f6f5d7d-9bba-4b26-b928-05119bbc98af] Running
	I1123 09:26:09.408073  332015 system_pods.go:61] "kube-controller-manager-ha-857095" [026d1873-0078-4c87-a9c1-b5a615844bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:26:09.408095  332015 system_pods.go:61] "kube-controller-manager-ha-857095-m02" [51f4d1ee-3b47-49f2-907e-68598e7d88e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:26:09.408128  332015 system_pods.go:61] "kube-controller-manager-ha-857095-m03" [234e7d83-1430-4ee4-91e4-73bf5e7221dc] Running
	I1123 09:26:09.408158  332015 system_pods.go:61] "kube-proxy-275zc" [b46e4648-46c6-4f04-85bc-bbfd4aedc821] Running
	I1123 09:26:09.408180  332015 system_pods.go:61] "kube-proxy-6k46z" [f2387038-f806-4417-961a-cf4390f4b4a5] Running
	I1123 09:26:09.408201  332015 system_pods.go:61] "kube-proxy-9qgbr" [a03beba1-4074-45e0-a3a0-a4cf0917b9a8] Running
	I1123 09:26:09.408237  332015 system_pods.go:61] "kube-proxy-lqqmc" [81a61d2b-bb1b-46d7-9acc-035150e8061b] Running
	I1123 09:26:09.408274  332015 system_pods.go:61] "kube-scheduler-ha-857095" [0598722f-31ac-4529-8b00-94c9bccf8255] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:26:09.408299  332015 system_pods.go:61] "kube-scheduler-ha-857095-m02" [0d16a804-69c1-47f1-b32c-3b35f950765f] Running
	I1123 09:26:09.408319  332015 system_pods.go:61] "kube-scheduler-ha-857095-m03" [aaf4d61f-0ec3-4e06-912a-a87fc3ab3cdb] Running
	I1123 09:26:09.408352  332015 system_pods.go:61] "kube-vip-ha-857095" [41b5690c-90a6-4557-9e9c-fcb76fe0c548] Running
	I1123 09:26:09.408380  332015 system_pods.go:61] "kube-vip-ha-857095-m02" [9c7a58ce-d823-401a-9695-36a0b87ab3ca] Running
	I1123 09:26:09.408432  332015 system_pods.go:61] "kube-vip-ha-857095-m03" [3830c657-5386-4214-a319-d42e19a40c12] Running
	I1123 09:26:09.408457  332015 system_pods.go:61] "storage-provisioner" [fd6347d8-5602-4a34-875b-811bc8ea2bc2] Running
	I1123 09:26:09.408480  332015 system_pods.go:74] duration metric: took 22.024671ms to wait for pod list to return data ...
	I1123 09:26:09.408503  332015 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:26:09.412561  332015 default_sa.go:45] found service account: "default"
	I1123 09:26:09.412632  332015 default_sa.go:55] duration metric: took 4.107335ms for default service account to be created ...
	I1123 09:26:09.412660  332015 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:26:09.420811  332015 system_pods.go:86] 26 kube-system pods found
	I1123 09:26:09.420908  332015 system_pods.go:89] "coredns-66bc5c9577-gqskt" [9ec3e73a-4033-41ae-927a-50584a3e9653] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:26:09.420935  332015 system_pods.go:89] "coredns-66bc5c9577-kqvhl" [bcbbf58b-9d2d-4a51-b4c1-bfec16447df5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:26:09.420976  332015 system_pods.go:89] "etcd-ha-857095" [3eaffe71-9ce6-4a9b-8530-1de6a4ec8773] Running
	I1123 09:26:09.421010  332015 system_pods.go:89] "etcd-ha-857095-m02" [5f8628c9-5725-4ca9-9622-b42a9b63c833] Running
	I1123 09:26:09.421033  332015 system_pods.go:89] "etcd-ha-857095-m03" [2ec71863-ebd8-45ca-9f19-707503671154] Running
	I1123 09:26:09.421055  332015 system_pods.go:89] "kindnet-8bs9t" [d9dee210-2075-4095-8540-c13c401e5a68] Running
	I1123 09:26:09.421089  332015 system_pods.go:89] "kindnet-ls8hm" [b7c7ef9d-ebdd-4bd4-97e6-595b84787117] Running
	I1123 09:26:09.421118  332015 system_pods.go:89] "kindnet-r7p2c" [a4f419f5-ecbc-48e6-8f98-732c4ac5a977] Running
	I1123 09:26:09.421158  332015 system_pods.go:89] "kindnet-v5cch" [4bfed9c2-b321-43a0-a18b-c867696cf4cb] Running
	I1123 09:26:09.421187  332015 system_pods.go:89] "kube-apiserver-ha-857095" [697606bd-c111-4922-adda-6902a7f40915] Running
	I1123 09:26:09.421211  332015 system_pods.go:89] "kube-apiserver-ha-857095-m02" [8516bae2-f830-4a82-aa30-dbd7bf657b52] Running
	I1123 09:26:09.421233  332015 system_pods.go:89] "kube-apiserver-ha-857095-m03" [9f6f5d7d-9bba-4b26-b928-05119bbc98af] Running
	I1123 09:26:09.421274  332015 system_pods.go:89] "kube-controller-manager-ha-857095" [026d1873-0078-4c87-a9c1-b5a615844bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:26:09.421303  332015 system_pods.go:89] "kube-controller-manager-ha-857095-m02" [51f4d1ee-3b47-49f2-907e-68598e7d88e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:26:09.421325  332015 system_pods.go:89] "kube-controller-manager-ha-857095-m03" [234e7d83-1430-4ee4-91e4-73bf5e7221dc] Running
	I1123 09:26:09.421348  332015 system_pods.go:89] "kube-proxy-275zc" [b46e4648-46c6-4f04-85bc-bbfd4aedc821] Running
	I1123 09:26:09.421385  332015 system_pods.go:89] "kube-proxy-6k46z" [f2387038-f806-4417-961a-cf4390f4b4a5] Running
	I1123 09:26:09.421421  332015 system_pods.go:89] "kube-proxy-9qgbr" [a03beba1-4074-45e0-a3a0-a4cf0917b9a8] Running
	I1123 09:26:09.421441  332015 system_pods.go:89] "kube-proxy-lqqmc" [81a61d2b-bb1b-46d7-9acc-035150e8061b] Running
	I1123 09:26:09.421463  332015 system_pods.go:89] "kube-scheduler-ha-857095" [0598722f-31ac-4529-8b00-94c9bccf8255] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:26:09.421494  332015 system_pods.go:89] "kube-scheduler-ha-857095-m02" [0d16a804-69c1-47f1-b32c-3b35f950765f] Running
	I1123 09:26:09.421521  332015 system_pods.go:89] "kube-scheduler-ha-857095-m03" [aaf4d61f-0ec3-4e06-912a-a87fc3ab3cdb] Running
	I1123 09:26:09.421541  332015 system_pods.go:89] "kube-vip-ha-857095" [41b5690c-90a6-4557-9e9c-fcb76fe0c548] Running
	I1123 09:26:09.421562  332015 system_pods.go:89] "kube-vip-ha-857095-m02" [9c7a58ce-d823-401a-9695-36a0b87ab3ca] Running
	I1123 09:26:09.421595  332015 system_pods.go:89] "kube-vip-ha-857095-m03" [3830c657-5386-4214-a319-d42e19a40c12] Running
	I1123 09:26:09.421621  332015 system_pods.go:89] "storage-provisioner" [fd6347d8-5602-4a34-875b-811bc8ea2bc2] Running
	I1123 09:26:09.421644  332015 system_pods.go:126] duration metric: took 8.958012ms to wait for k8s-apps to be running ...
	I1123 09:26:09.421666  332015 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:26:09.421753  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:26:09.436477  332015 system_svc.go:56] duration metric: took 14.802398ms WaitForService to wait for kubelet
	I1123 09:26:09.436515  332015 kubeadm.go:587] duration metric: took 8.250000324s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:26:09.436534  332015 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:26:09.440490  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:09.440519  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:09.440532  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:09.440537  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:09.440549  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:09.440555  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:09.440563  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:09.440568  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:09.440578  332015 node_conditions.go:105] duration metric: took 4.039042ms to run NodePressure ...
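The NodePressure check reads each node's reported CPU and ephemeral-storage capacity (here 2 CPUs and 203034800Ki of ephemeral storage per node). Roughly the same figures can be pulled with kubectl, assuming a working kubeconfig for the cluster:

    # Capacity figures comparable to the node_conditions entries above.
    kubectl describe nodes | grep -A 6 'Capacity:'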
	I1123 09:26:09.440592  332015 start.go:242] waiting for startup goroutines ...
	I1123 09:26:09.440627  332015 start.go:256] writing updated cluster config ...
	I1123 09:26:09.444444  332015 out.go:203] 
	I1123 09:26:09.447845  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:09.447976  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:26:09.451331  332015 out.go:179] * Starting "ha-857095-m04" worker node in "ha-857095" cluster
	I1123 09:26:09.454181  332015 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:26:09.457128  332015 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:26:09.459981  332015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:26:09.460044  332015 cache.go:65] Caching tarball of preloaded images
	I1123 09:26:09.460053  332015 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:26:09.460162  332015 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 09:26:09.460183  332015 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:26:09.460319  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:26:09.487056  332015 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:26:09.487075  332015 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:26:09.487099  332015 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:26:09.487126  332015 start.go:360] acquireMachinesLock for ha-857095-m04: {Name:mkc778064e426bc743bab6e8fad34bbaae40e782 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:26:09.487176  332015 start.go:364] duration metric: took 35.471µs to acquireMachinesLock for "ha-857095-m04"
	I1123 09:26:09.487195  332015 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:26:09.487200  332015 fix.go:54] fixHost starting: m04
	I1123 09:26:09.487451  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m04 --format={{.State.Status}}
	I1123 09:26:09.507899  332015 fix.go:112] recreateIfNeeded on ha-857095-m04: state=Stopped err=<nil>
	W1123 09:26:09.507924  332015 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:26:09.511107  332015 out.go:252] * Restarting existing docker container for "ha-857095-m04" ...
	I1123 09:26:09.511253  332015 cli_runner.go:164] Run: docker start ha-857095-m04
	I1123 09:26:09.866032  332015 cli_runner.go:164] Run: docker container inspect ha-857095-m04 --format={{.State.Status}}
	I1123 09:26:09.896315  332015 kic.go:430] container "ha-857095-m04" state is running.
	I1123 09:26:09.896669  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m04
	I1123 09:26:09.920856  332015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/config.json ...
	I1123 09:26:09.921084  332015 machine.go:94] provisionDockerMachine start ...
	I1123 09:26:09.921148  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:09.953275  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:26:09.953746  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1123 09:26:09.953766  332015 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:26:09.954414  332015 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 09:26:13.177535  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m04
	
	I1123 09:26:13.177568  332015 ubuntu.go:182] provisioning hostname "ha-857095-m04"
	I1123 09:26:13.177640  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:13.208850  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:26:13.209159  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1123 09:26:13.209176  332015 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-857095-m04 && echo "ha-857095-m04" | sudo tee /etc/hostname
	I1123 09:26:13.425765  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-857095-m04
	
	I1123 09:26:13.425859  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:13.460720  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:26:13.461034  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1123 09:26:13.461061  332015 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-857095-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-857095-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-857095-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:26:13.666205  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:26:13.666234  332015 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 09:26:13.666252  332015 ubuntu.go:190] setting up certificates
	I1123 09:26:13.666263  332015 provision.go:84] configureAuth start
	I1123 09:26:13.666323  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m04
	I1123 09:26:13.699046  332015 provision.go:143] copyHostCerts
	I1123 09:26:13.699100  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:26:13.699136  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 09:26:13.699149  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:26:13.699242  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 09:26:13.699332  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:26:13.699356  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 09:26:13.699365  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:26:13.699394  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 09:26:13.699443  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:26:13.699466  332015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 09:26:13.699475  332015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:26:13.699504  332015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 09:26:13.699558  332015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.ha-857095-m04 san=[127.0.0.1 192.168.49.5 ha-857095-m04 localhost minikube]
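The regenerated server certificate carries SANs for 127.0.0.1, the node IP 192.168.49.5, the hostname ha-857095-m04 and the localhost/minikube aliases, so the machine validates under every name it may be reached by. One way to inspect the SANs of the resulting certificate (path as used in this run):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem \
      | grep -A 1 'Subject Alternative Name'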
	I1123 09:26:13.947128  332015 provision.go:177] copyRemoteCerts
	I1123 09:26:13.947199  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:26:13.947245  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:13.964666  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m04/id_rsa Username:docker}
	I1123 09:26:14.108546  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1123 09:26:14.108614  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:26:14.147222  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1123 09:26:14.147298  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 09:26:14.174245  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1123 09:26:14.174323  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:26:14.202367  332015 provision.go:87] duration metric: took 536.081268ms to configureAuth
	I1123 09:26:14.202398  332015 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:26:14.202692  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:14.202823  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:14.228826  332015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:26:14.229151  332015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1123 09:26:14.229165  332015 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:26:14.698077  332015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:26:14.698147  332015 machine.go:97] duration metric: took 4.777046451s to provisionDockerMachine
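provisionDockerMachine finishes by dropping CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' into /etc/sysconfig/crio.minikube and restarting CRI-O, so image pulls from in-cluster registries inside the service CIDR are allowed without TLS. A quick, hypothetical check that the override landed:

    grep CRIO_MINIKUBE_OPTIONS /etc/sysconfig/crio.minikube
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '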
	I1123 09:26:14.698176  332015 start.go:293] postStartSetup for "ha-857095-m04" (driver="docker")
	I1123 09:26:14.698221  332015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:26:14.698305  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:26:14.698371  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:14.723686  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m04/id_rsa Username:docker}
	I1123 09:26:14.851030  332015 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:26:14.858337  332015 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:26:14.858362  332015 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:26:14.858374  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 09:26:14.858433  332015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 09:26:14.858508  332015 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 09:26:14.858515  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /etc/ssl/certs/2849042.pem
	I1123 09:26:14.858611  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:26:14.870806  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:26:14.904225  332015 start.go:296] duration metric: took 206.013245ms for postStartSetup
	I1123 09:26:14.904312  332015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:26:14.904357  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:14.925549  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m04/id_rsa Username:docker}
	I1123 09:26:15.048457  332015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:26:15.064072  332015 fix.go:56] duration metric: took 5.57686319s for fixHost
	I1123 09:26:15.064101  332015 start.go:83] releasing machines lock for "ha-857095-m04", held for 5.576912749s
	I1123 09:26:15.064189  332015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m04
	I1123 09:26:15.099935  332015 out.go:179] * Found network options:
	I1123 09:26:15.102733  332015 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1123 09:26:15.105537  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:26:15.105581  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:26:15.105592  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:26:15.105615  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:26:15.105625  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	W1123 09:26:15.105635  332015 proxy.go:120] fail to check proxy env: Error ip not in block
	I1123 09:26:15.105709  332015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:26:15.105751  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:15.106052  332015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:26:15.106106  332015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:26:15.139318  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m04/id_rsa Username:docker}
	I1123 09:26:15.143260  332015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m04/id_rsa Username:docker}
	I1123 09:26:15.438462  332015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:26:15.444861  332015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:26:15.444936  332015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:26:15.465823  332015 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:26:15.465847  332015 start.go:496] detecting cgroup driver to use...
	I1123 09:26:15.465876  332015 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:26:15.465925  332015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:26:15.496588  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:26:15.514577  332015 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:26:15.514673  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:26:15.534950  332015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:26:15.548709  332015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:26:15.754867  332015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:26:15.954809  332015 docker.go:234] disabling docker service ...
	I1123 09:26:15.954903  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:26:15.979986  332015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:26:15.995201  332015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:26:16.195305  332015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:26:16.373235  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:26:16.389735  332015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:26:16.410006  332015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:26:16.410174  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.419483  332015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:26:16.419592  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.428394  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.444114  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.463213  332015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:26:16.471981  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.480994  332015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.489302  332015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:26:16.498210  332015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:26:16.508001  332015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:26:16.516953  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:26:16.726052  332015 ssh_runner.go:195] Run: sudo systemctl restart crio
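The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before this restart: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroupfs is set as the cgroup manager, conmon is placed in the pod cgroup, and unprivileged low ports are opened via default_sysctls. A sketch of how the relevant keys might look afterwards (exact contents depend on the kicbase image):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",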
	I1123 09:26:16.986187  332015 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:26:16.986301  332015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:26:16.994949  332015 start.go:564] Will wait 60s for crictl version
	I1123 09:26:16.995057  332015 ssh_runner.go:195] Run: which crictl
	I1123 09:26:17.005848  332015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:26:17.068139  332015 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:26:17.068261  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:26:17.123372  332015 ssh_runner.go:195] Run: crio --version
	I1123 09:26:17.173210  332015 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:26:17.176207  332015 out.go:179]   - env NO_PROXY=192.168.49.2
	I1123 09:26:17.179404  332015 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1123 09:26:17.182767  332015 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1123 09:26:17.185787  332015 cli_runner.go:164] Run: docker network inspect ha-857095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:26:17.204073  332015 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 09:26:17.207997  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:26:17.218002  332015 mustload.go:66] Loading cluster: ha-857095
	I1123 09:26:17.218249  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:17.218496  332015 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:26:17.246745  332015 host.go:66] Checking if "ha-857095" exists ...
	I1123 09:26:17.247017  332015 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095 for IP: 192.168.49.5
	I1123 09:26:17.247024  332015 certs.go:195] generating shared ca certs ...
	I1123 09:26:17.247040  332015 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:26:17.247177  332015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 09:26:17.247217  332015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 09:26:17.247228  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1123 09:26:17.247241  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1123 09:26:17.247254  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1123 09:26:17.247265  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1123 09:26:17.247315  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 09:26:17.247346  332015 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 09:26:17.247354  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:26:17.247382  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:26:17.247406  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:26:17.247429  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 09:26:17.247473  332015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:26:17.247504  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /usr/share/ca-certificates/2849042.pem
	I1123 09:26:17.247517  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:17.247527  332015 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem -> /usr/share/ca-certificates/284904.pem
	I1123 09:26:17.247544  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:26:17.302193  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:26:17.327160  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:26:17.353974  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:26:17.377204  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 09:26:17.403460  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:26:17.423323  332015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 09:26:17.448832  332015 ssh_runner.go:195] Run: openssl version
	I1123 09:26:17.456781  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 09:26:17.467249  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 09:26:17.472303  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 09:26:17.472418  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 09:26:17.523101  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:26:17.535534  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:26:17.546862  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:17.552603  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:17.552699  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:26:17.599146  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:26:17.610235  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 09:26:17.618699  332015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 09:26:17.623313  332015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 09:26:17.623432  332015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 09:26:17.676492  332015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
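The hash-named symlinks follow OpenSSL's CA directory convention: verification by CA path looks certificates up via <subject-hash>.0 links, so each PEM copied into /usr/share/ca-certificates gets a matching link in /etc/ssl/certs. The hash itself comes from the command already shown above:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, hence the /etc/ssl/certs/b5213941.0 symlink created earlier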
	I1123 09:26:17.685680  332015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:26:17.690257  332015 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:26:17.690334  332015 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1123 09:26:17.690451  332015 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-857095-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-857095 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:26:17.690571  332015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:26:17.699579  332015 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:26:17.699678  332015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1123 09:26:17.711806  332015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 09:26:17.726908  332015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:26:17.741366  332015 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1123 09:26:17.745929  332015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
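For the worker node, control-plane.minikube.internal is pointed at the kube-vip address 192.168.49.254 rather than at any single control-plane node; the rewrite goes through a temp file and `sudo cp` so /etc/hosts is replaced in one step. A hypothetical check that the alias resolves as intended:

    getent hosts control-plane.minikube.internal
    # expected: 192.168.49.254  control-plane.minikube.internal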
	I1123 09:26:17.756453  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:26:17.960408  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:26:17.989357  332015 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1123 09:26:17.989946  332015 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:26:17.994531  332015 out.go:179] * Verifying Kubernetes components...
	I1123 09:26:17.998123  332015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:26:18.239793  332015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:26:18.262774  332015 kapi.go:59] client config for ha-857095: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1123 09:26:18.262843  332015 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1123 09:26:18.263099  332015 node_ready.go:35] waiting up to 6m0s for node "ha-857095-m04" to be "Ready" ...
	I1123 09:26:18.269812  332015 node_ready.go:49] node "ha-857095-m04" is "Ready"
	I1123 09:26:18.269839  332015 node_ready.go:38] duration metric: took 6.727383ms for node "ha-857095-m04" to be "Ready" ...
	I1123 09:26:18.269854  332015 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:26:18.269907  332015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:26:18.288660  332015 system_svc.go:56] duration metric: took 18.797608ms WaitForService to wait for kubelet
	I1123 09:26:18.288686  332015 kubeadm.go:587] duration metric: took 299.282478ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:26:18.288702  332015 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:26:18.292995  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:18.293021  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:18.293032  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:18.293037  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:18.293042  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:18.293046  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:18.293051  332015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:26:18.293055  332015 node_conditions.go:123] node cpu capacity is 2
	I1123 09:26:18.293059  332015 node_conditions.go:105] duration metric: took 4.352482ms to run NodePressure ...
	I1123 09:26:18.293072  332015 start.go:242] waiting for startup goroutines ...
	I1123 09:26:18.293094  332015 start.go:256] writing updated cluster config ...
	I1123 09:26:18.293459  332015 ssh_runner.go:195] Run: rm -f paused
	I1123 09:26:18.297614  332015 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:26:18.298096  332015 kapi.go:59] client config for ha-857095: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/ha-857095/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:26:18.325623  332015 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gqskt" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:26:20.334064  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	W1123 09:26:22.832313  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	W1123 09:26:24.834199  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	W1123 09:26:27.335305  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	W1123 09:26:29.831965  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	W1123 09:26:31.861015  332015 pod_ready.go:104] pod "coredns-66bc5c9577-gqskt" is not "Ready", error: <nil>
	I1123 09:26:32.333037  332015 pod_ready.go:94] pod "coredns-66bc5c9577-gqskt" is "Ready"
	I1123 09:26:32.333066  332015 pod_ready.go:86] duration metric: took 14.007410196s for pod "coredns-66bc5c9577-gqskt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.333077  332015 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kqvhl" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.338930  332015 pod_ready.go:94] pod "coredns-66bc5c9577-kqvhl" is "Ready"
	I1123 09:26:32.338959  332015 pod_ready.go:86] duration metric: took 5.876773ms for pod "coredns-66bc5c9577-kqvhl" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.342889  332015 pod_ready.go:83] waiting for pod "etcd-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.354954  332015 pod_ready.go:94] pod "etcd-ha-857095" is "Ready"
	I1123 09:26:32.354982  332015 pod_ready.go:86] duration metric: took 12.06568ms for pod "etcd-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.354992  332015 pod_ready.go:83] waiting for pod "etcd-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.360908  332015 pod_ready.go:94] pod "etcd-ha-857095-m02" is "Ready"
	I1123 09:26:32.360988  332015 pod_ready.go:86] duration metric: took 5.989209ms for pod "etcd-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.361006  332015 pod_ready.go:83] waiting for pod "etcd-ha-857095-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.527237  332015 request.go:683] "Waited before sending request" delay="163.188719ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095-m03"
	I1123 09:26:32.531141  332015 pod_ready.go:94] pod "etcd-ha-857095-m03" is "Ready"
	I1123 09:26:32.531176  332015 pod_ready.go:86] duration metric: took 170.163678ms for pod "etcd-ha-857095-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.727633  332015 request.go:683] "Waited before sending request" delay="196.333255ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1123 09:26:32.731295  332015 pod_ready.go:83] waiting for pod "kube-apiserver-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:32.927721  332015 request.go:683] "Waited before sending request" delay="196.318551ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857095"
	I1123 09:26:33.127610  332015 request.go:683] "Waited before sending request" delay="196.351881ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095"
	I1123 09:26:33.131377  332015 pod_ready.go:94] pod "kube-apiserver-ha-857095" is "Ready"
	I1123 09:26:33.131404  332015 pod_ready.go:86] duration metric: took 400.08428ms for pod "kube-apiserver-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:33.131415  332015 pod_ready.go:83] waiting for pod "kube-apiserver-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:33.326734  332015 request.go:683] "Waited before sending request" delay="195.246384ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857095-m02"
	I1123 09:26:33.527259  332015 request.go:683] "Waited before sending request" delay="197.325627ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095-m02"
	I1123 09:26:33.531408  332015 pod_ready.go:94] pod "kube-apiserver-ha-857095-m02" is "Ready"
	I1123 09:26:33.531476  332015 pod_ready.go:86] duration metric: took 400.053592ms for pod "kube-apiserver-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:33.531510  332015 pod_ready.go:83] waiting for pod "kube-apiserver-ha-857095-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:33.726854  332015 request.go:683] "Waited before sending request" delay="195.24293ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-857095-m03"
	I1123 09:26:33.927056  332015 request.go:683] "Waited before sending request" delay="196.304447ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095-m03"
	I1123 09:26:33.930670  332015 pod_ready.go:94] pod "kube-apiserver-ha-857095-m03" is "Ready"
	I1123 09:26:33.930738  332015 pod_ready.go:86] duration metric: took 399.207142ms for pod "kube-apiserver-ha-857095-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:34.127173  332015 request.go:683] "Waited before sending request" delay="196.311848ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1123 09:26:34.131888  332015 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:34.327442  332015 request.go:683] "Waited before sending request" delay="195.421664ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857095"
	I1123 09:26:34.526909  332015 request.go:683] "Waited before sending request" delay="195.121754ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095"
	I1123 09:26:34.727795  332015 request.go:683] "Waited before sending request" delay="95.293534ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-857095"
	I1123 09:26:34.926808  332015 request.go:683] "Waited before sending request" delay="192.288691ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095"
	I1123 09:26:35.326671  332015 request.go:683] "Waited before sending request" delay="190.240931ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095"
	I1123 09:26:35.727087  332015 request.go:683] "Waited before sending request" delay="90.213857ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-857095"
	W1123 09:26:36.147664  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:38.639668  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:41.138106  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:43.639146  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:46.140223  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:48.638331  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	W1123 09:26:50.639066  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095" is not "Ready", error: <nil>
	I1123 09:26:51.639670  332015 pod_ready.go:94] pod "kube-controller-manager-ha-857095" is "Ready"
	I1123 09:26:51.639700  332015 pod_ready.go:86] duration metric: took 17.507743609s for pod "kube-controller-manager-ha-857095" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:26:51.639710  332015 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:26:53.652573  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:26:56.146503  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:26:58.147735  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:00.225967  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:02.647589  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:04.647752  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:07.153585  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:09.646738  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:12.145665  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:14.146292  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:16.646315  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:18.649017  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:20.649200  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:23.146376  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:25.147713  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:27.646124  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:29.646694  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:32.147157  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:34.647065  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:37.145928  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:39.149680  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:41.646227  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:43.648098  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:46.145963  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:48.146438  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:50.147240  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:52.647369  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:55.146780  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:27:57.649707  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:00.227209  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:02.646807  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:04.646959  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:07.146296  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:09.646937  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:11.648675  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:14.146286  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:16.646924  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:18.651084  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:21.147312  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:23.646217  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:25.646310  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:27.646958  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:30.146762  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:32.647802  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:35.146446  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:37.147422  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:39.647209  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:42.147709  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:44.646580  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:47.146583  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:49.646857  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:51.647231  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:54.147109  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:56.646513  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:28:58.646743  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:00.647210  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:03.146363  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:05.146523  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:07.147002  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:09.647653  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:12.146246  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:14.146687  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:16.157442  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:18.649348  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:21.146242  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:23.146404  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:25.646842  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:27.647159  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:29.647890  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:32.147183  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:34.647714  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:37.146420  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:39.146792  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:41.646176  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:43.646530  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:46.147106  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:48.149876  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:50.646833  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:53.145934  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:55.147151  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:57.646423  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:29:59.646898  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:01.651276  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:04.146294  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:06.150790  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:08.648014  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:11.147652  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:13.646274  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	W1123 09:30:16.147137  332015 pod_ready.go:104] pod "kube-controller-manager-ha-857095-m02" is not "Ready", error: <nil>
	I1123 09:30:18.297999  332015 pod_ready.go:86] duration metric: took 3m26.658254957s for pod "kube-controller-manager-ha-857095-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:30:18.298033  332015 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1123 09:30:18.298048  332015 pod_ready.go:40] duration metric: took 4m0.000406947s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:30:18.301156  332015 out.go:203] 
	W1123 09:30:18.304209  332015 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1123 09:30:18.307045  332015 out.go:203] 
	
	
	==> CRI-O <==
	Nov 23 09:26:20 ha-857095 crio[665]: time="2025-11-23T09:26:20.862323332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:26:20 ha-857095 crio[665]: time="2025-11-23T09:26:20.881754859Z" level=info msg="Created container 1e1332977cad9649cc196ae764ff285705d33ea97901ac8989363521003e0c1c: kube-system/storage-provisioner/storage-provisioner" id=d5a8a1d3-4e58-4349-a0ad-0995b7140043 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:26:20 ha-857095 crio[665]: time="2025-11-23T09:26:20.883093786Z" level=info msg="Starting container: 1e1332977cad9649cc196ae764ff285705d33ea97901ac8989363521003e0c1c" id=712d93bb-15d4-4499-9419-0be2273a15bf name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:26:20 ha-857095 crio[665]: time="2025-11-23T09:26:20.886919608Z" level=info msg="Started container" PID=1464 containerID=1e1332977cad9649cc196ae764ff285705d33ea97901ac8989363521003e0c1c description=kube-system/storage-provisioner/storage-provisioner id=712d93bb-15d4-4499-9419-0be2273a15bf name=/runtime.v1.RuntimeService/StartContainer sandboxID=89473e76d12005c3f55b49ecc42454c1ef67be9260b26ec4b676fd34debc0d80
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.460338709Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.479184228Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.479355167Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.479441683Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.492946689Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.492986977Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.493010509Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.520498311Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.520535941Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.520557766Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.531467208Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:26:30 ha-857095 crio[665]: time="2025-11-23T09:26:30.531504222Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.461905305Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=4d71707e-340a-471c-a17c-392bda308647 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.463102323Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=cdc30ef2-5502-4f37-bc6a-387205d0372f name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.46432019Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-857095/kube-controller-manager" id=e9e73fad-72e8-426c-b874-4cc1bd49e392 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.464442185Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.472179073Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.475537935Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.496575929Z" level=info msg="Created container 42babfae983262eb923a97314d7a3b093122d61af813a84a2bcf0956e5326956: kube-system/kube-controller-manager-ha-857095/kube-controller-manager" id=e9e73fad-72e8-426c-b874-4cc1bd49e392 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.497241413Z" level=info msg="Starting container: 42babfae983262eb923a97314d7a3b093122d61af813a84a2bcf0956e5326956" id=2a3e48b5-355d-4798-aaf7-7f6b61e0dc6c name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:26:33 ha-857095 crio[665]: time="2025-11-23T09:26:33.499360139Z" level=info msg="Started container" PID=1519 containerID=42babfae983262eb923a97314d7a3b093122d61af813a84a2bcf0956e5326956 description=kube-system/kube-controller-manager-ha-857095/kube-controller-manager id=2a3e48b5-355d-4798-aaf7-7f6b61e0dc6c name=/runtime.v1.RuntimeService/StartContainer sandboxID=59c02939558c0de2a773da5e9f43cad9b5fb72908c248e77f86ee19f370077a6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	42babfae98326       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   4 minutes ago       Running             kube-controller-manager   5                   59c02939558c0       kube-controller-manager-ha-857095   kube-system
	1e1332977cad9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Running             storage-provisioner       2                   89473e76d1200       storage-provisioner                 kube-system
	f05afc1b0445e       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   4 minutes ago       Running             busybox                   1                   4a02c866c3881       busybox-7b57f96db7-jr7sx            default
	f9fc6c6a40826       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   1                   37bcc6634aaea       coredns-66bc5c9577-kqvhl            kube-system
	87aec09c596b0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 minutes ago       Running             kube-proxy                1                   0282e268e1c22       kube-proxy-9qgbr                    kube-system
	d01764f14c48f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Exited              storage-provisioner       1                   89473e76d1200       storage-provisioner                 kube-system
	6b76bdb0dc741       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   4 minutes ago       Running             coredns                   1                   c56d4acdc2234       coredns-66bc5c9577-gqskt            kube-system
	44a90d22da14b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 minutes ago       Running             kindnet-cni               1                   397455ea01fe1       kindnet-r7p2c                       kube-system
	0a33af9e8b2a4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   4 minutes ago       Exited              kube-controller-manager   4                   59c02939558c0       kube-controller-manager-ha-857095   kube-system
	20bdce066bf2b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   5 minutes ago       Running             kube-apiserver            2                   8ef118042f73c       kube-apiserver-ha-857095            kube-system
	87647aaa5cefc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Exited              kube-apiserver            1                   8ef118042f73c       kube-apiserver-ha-857095            kube-system
	9e42b9253fb8b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   6 minutes ago       Running             etcd                      1                   6fc84b4ecc8df       etcd-ha-857095                      kube-system
	99df51d331941       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   6 minutes ago       Running             kube-vip                  0                   d5e7755420e7c       kube-vip-ha-857095                  kube-system
	ae37103ec6813       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   6 minutes ago       Running             kube-scheduler            1                   8a4c5a79b6a82       kube-scheduler-ha-857095            kube-system
	
	
	==> coredns [6b76bdb0dc741434ecf605ce04cd2bb3aa3ad5985dd29cb11b1af0d9172d8676] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56838 - 9966 "HINFO IN 284337624056944186.8766603808723713126. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.038775907s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f9fc6c6a4082694b13ca579cc6787e448aa81ab706e072c7930725c06097556b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40754 - 52654 "HINFO IN 1059978023450998029.7253782516717518684. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021959119s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-857095
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-857095
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=ha-857095
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_18_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:18:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857095
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:30:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:30:33 +0000   Sun, 23 Nov 2025 09:18:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:30:33 +0000   Sun, 23 Nov 2025 09:18:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:30:33 +0000   Sun, 23 Nov 2025 09:18:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:30:33 +0000   Sun, 23 Nov 2025 09:25:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-857095
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                eb10d252-491a-4fd2-89b0-513efb8fdf15
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-jr7sx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 coredns-66bc5c9577-gqskt             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 coredns-66bc5c9577-kqvhl             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-ha-857095                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-r7p2c                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-857095             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-857095    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9qgbr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-857095             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-857095                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m44s                  kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-857095 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-857095 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-857095 status is now: NodeHasSufficientMemory
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     11m                    kubelet          Node ha-857095 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m                    kubelet          Node ha-857095 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m                    kubelet          Node ha-857095 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           11m                    node-controller  Node ha-857095 event: Registered Node ha-857095 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-857095 event: Registered Node ha-857095 in Controller
	  Normal   NodeReady                11m                    kubelet          Node ha-857095 status is now: NodeReady
	  Normal   RegisteredNode           9m46s                  node-controller  Node ha-857095 event: Registered Node ha-857095 in Controller
	  Normal   RegisteredNode           6m59s                  node-controller  Node ha-857095 event: Registered Node ha-857095 in Controller
	  Normal   NodeHasSufficientMemory  6m32s (x8 over 6m32s)  kubelet          Node ha-857095 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m32s (x8 over 6m32s)  kubelet          Node ha-857095 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m32s (x8 over 6m32s)  kubelet          Node ha-857095 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-857095 event: Registered Node ha-857095 in Controller
	  Normal   RegisteredNode           3m59s                  node-controller  Node ha-857095 event: Registered Node ha-857095 in Controller
	
	
	Name:               ha-857095-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-857095-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=ha-857095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_23T09_19_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:19:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857095-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:30:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:28:13 +0000   Sun, 23 Nov 2025 09:19:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:28:13 +0000   Sun, 23 Nov 2025 09:19:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:28:13 +0000   Sun, 23 Nov 2025 09:19:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:28:13 +0000   Sun, 23 Nov 2025 09:20:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-857095-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                086daa3d-fd9f-4e74-8f1b-3235f7c68f88
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-ltgrn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 etcd-ha-857095-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-v5cch                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-857095-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-857095-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-275zc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-857095-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-857095-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m43s                  kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   CIDRAssignmentFailed     11m                    cidrAllocator    Node ha-857095-m02 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           11m                    node-controller  Node ha-857095-m02 event: Registered Node ha-857095-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-857095-m02 event: Registered Node ha-857095-m02 in Controller
	  Normal   RegisteredNode           9m46s                  node-controller  Node ha-857095-m02 event: Registered Node ha-857095-m02 in Controller
	  Normal   NodeHasSufficientPID     7m36s (x8 over 7m36s)  kubelet          Node ha-857095-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m36s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m36s (x8 over 7m36s)  kubelet          Node ha-857095-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m36s (x8 over 7m36s)  kubelet          Node ha-857095-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           6m59s                  node-controller  Node ha-857095-m02 event: Registered Node ha-857095-m02 in Controller
	  Normal   NodeHasSufficientMemory  6m29s (x8 over 6m29s)  kubelet          Node ha-857095-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 6m29s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    6m29s (x8 over 6m29s)  kubelet          Node ha-857095-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m29s (x8 over 6m29s)  kubelet          Node ha-857095-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        5m29s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-857095-m02 event: Registered Node ha-857095-m02 in Controller
	  Normal   RegisteredNode           3m59s                  node-controller  Node ha-857095-m02 event: Registered Node ha-857095-m02 in Controller
	
	
	Name:               ha-857095-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-857095-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=ha-857095
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_23T09_21_39_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:21:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-857095-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:30:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:30:29 +0000   Sun, 23 Nov 2025 09:21:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:30:29 +0000   Sun, 23 Nov 2025 09:21:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:30:29 +0000   Sun, 23 Nov 2025 09:21:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:30:29 +0000   Sun, 23 Nov 2025 09:22:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-857095-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                b75a4a3a-17bf-4722-ac06-1e0fa9c0c524
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-9rhw7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 kindnet-ls8hm               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m57s
	  kube-system                 kube-proxy-lqqmc            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m54s                  kube-proxy       
	  Normal   Starting                 4m4s                   kube-proxy       
	  Normal   Starting                 8m57s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     8m57s (x3 over 8m57s)  kubelet          Node ha-857095-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    8m57s (x3 over 8m57s)  kubelet          Node ha-857095-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  8m57s (x3 over 8m57s)  kubelet          Node ha-857095-m04 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 8m57s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   CIDRAssignmentFailed     8m56s                  cidrAllocator    Node ha-857095-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           8m56s                  node-controller  Node ha-857095-m04 event: Registered Node ha-857095-m04 in Controller
	  Normal   RegisteredNode           8m54s                  node-controller  Node ha-857095-m04 event: Registered Node ha-857095-m04 in Controller
	  Normal   RegisteredNode           8m53s                  node-controller  Node ha-857095-m04 event: Registered Node ha-857095-m04 in Controller
	  Normal   NodeReady                8m15s                  kubelet          Node ha-857095-m04 status is now: NodeReady
	  Normal   RegisteredNode           6m59s                  node-controller  Node ha-857095-m04 event: Registered Node ha-857095-m04 in Controller
	  Warning  CgroupV1                 4m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 4m24s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  4m21s (x8 over 4m24s)  kubelet          Node ha-857095-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m21s (x8 over 4m24s)  kubelet          Node ha-857095-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m21s (x8 over 4m24s)  kubelet          Node ha-857095-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-857095-m04 event: Registered Node ha-857095-m04 in Controller
	  Normal   RegisteredNode           3m59s                  node-controller  Node ha-857095-m04 event: Registered Node ha-857095-m04 in Controller
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015154] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.511595] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034200] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.753844] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.833249] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:37] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/22/fs': -2
	[Nov23 08:56] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 08:58] overlayfs: idmapped layers are currently not supported
	[  +0.083595] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov23 09:04] overlayfs: idmapped layers are currently not supported
	[ +53.074501] overlayfs: idmapped layers are currently not supported
	[Nov23 09:18] overlayfs: idmapped layers are currently not supported
	[Nov23 09:19] overlayfs: idmapped layers are currently not supported
	[Nov23 09:20] overlayfs: idmapped layers are currently not supported
	[Nov23 09:21] overlayfs: idmapped layers are currently not supported
	[Nov23 09:22] overlayfs: idmapped layers are currently not supported
	[Nov23 09:24] overlayfs: idmapped layers are currently not supported
	[  +2.761695] overlayfs: idmapped layers are currently not supported
	[Nov23 09:25] overlayfs: idmapped layers are currently not supported
	[Nov23 09:26] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9e42b9253fb8b630e7dc3c1bd90335205bd4e883a1a22f51d4cb68ee751bee2f] <==
	{"level":"info","ts":"2025-11-23T09:26:11.182065Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"41d977b14da551f4","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-23T09:26:11.182187Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4"}
	{"level":"info","ts":"2025-11-23T09:26:11.251471Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4"}
	{"level":"info","ts":"2025-11-23T09:26:11.252026Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4"}
	{"level":"warn","ts":"2025-11-23T09:30:25.758526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:53692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:25.802691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:53694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:25.831526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:53708","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T09:30:25.861222Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 15744404423091500964)"}
	{"level":"info","ts":"2025-11-23T09:30:25.863377Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"41d977b14da551f4","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-11-23T09:30:25.863422Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"41d977b14da551f4"}
	{"level":"warn","ts":"2025-11-23T09:30:25.863481Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"41d977b14da551f4"}
	{"level":"info","ts":"2025-11-23T09:30:25.863501Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"41d977b14da551f4"}
	{"level":"warn","ts":"2025-11-23T09:30:25.863538Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"41d977b14da551f4"}
	{"level":"info","ts":"2025-11-23T09:30:25.863556Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"41d977b14da551f4"}
	{"level":"info","ts":"2025-11-23T09:30:25.863674Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4"}
	{"level":"warn","ts":"2025-11-23T09:30:25.863862Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4","error":"context canceled"}
	{"level":"warn","ts":"2025-11-23T09:30:25.863894Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"41d977b14da551f4","error":"failed to read 41d977b14da551f4 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-11-23T09:30:25.863923Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4"}
	{"level":"warn","ts":"2025-11-23T09:30:25.864023Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4","error":"context canceled"}
	{"level":"info","ts":"2025-11-23T09:30:25.864040Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"41d977b14da551f4"}
	{"level":"info","ts":"2025-11-23T09:30:25.864050Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"41d977b14da551f4"}
	{"level":"info","ts":"2025-11-23T09:30:25.864060Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"41d977b14da551f4"}
	{"level":"info","ts":"2025-11-23T09:30:25.864105Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"41d977b14da551f4"}
	{"level":"warn","ts":"2025-11-23T09:30:25.884131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:35936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:30:25.884275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:35932","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:30:35 up  2:13,  0 user,  load average: 0.37, 1.07, 1.44
	Linux ha-857095 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [44a90d22da14ba2218ba4b094e5bf35de76a3687c704587dff1bde2ca21ded04] <==
	I1123 09:30:00.470661       1 main.go:324] Node ha-857095-m02 has CIDR [10.244.1.0/24] 
	I1123 09:30:00.470730       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1123 09:30:00.470735       1 main.go:324] Node ha-857095-m03 has CIDR [10.244.2.0/24] 
	I1123 09:30:10.458780       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:30:10.458815       1 main.go:301] handling current node
	I1123 09:30:10.458830       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1123 09:30:10.458835       1 main.go:324] Node ha-857095-m02 has CIDR [10.244.1.0/24] 
	I1123 09:30:10.458995       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1123 09:30:10.459007       1 main.go:324] Node ha-857095-m03 has CIDR [10.244.2.0/24] 
	I1123 09:30:10.459074       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1123 09:30:10.459085       1 main.go:324] Node ha-857095-m04 has CIDR [10.244.3.0/24] 
	I1123 09:30:20.463179       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:30:20.463218       1 main.go:301] handling current node
	I1123 09:30:20.463234       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1123 09:30:20.463240       1 main.go:324] Node ha-857095-m02 has CIDR [10.244.1.0/24] 
	I1123 09:30:20.463384       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1123 09:30:20.463402       1 main.go:324] Node ha-857095-m03 has CIDR [10.244.2.0/24] 
	I1123 09:30:20.463484       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1123 09:30:20.463495       1 main.go:324] Node ha-857095-m04 has CIDR [10.244.3.0/24] 
	I1123 09:30:30.458086       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 09:30:30.458233       1 main.go:301] handling current node
	I1123 09:30:30.458276       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1123 09:30:30.458310       1 main.go:324] Node ha-857095-m02 has CIDR [10.244.1.0/24] 
	I1123 09:30:30.458737       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1123 09:30:30.458771       1 main.go:324] Node ha-857095-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [20bdce066bf2bdfda4bff2f53735c6b970c68ead5b62cf3e3e86c4b95b160933] <==
	I1123 09:25:47.278866       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 09:25:47.278902       1 policy_source.go:240] refreshing policies
	I1123 09:25:47.287561       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 09:25:47.287661       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 09:25:47.303439       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 09:25:47.304398       1 aggregator.go:171] initial CRD sync complete...
	I1123 09:25:47.304420       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 09:25:47.304427       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:25:47.304433       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:25:47.306880       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:25:47.312368       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:25:47.315842       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 09:25:47.331215       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1123 09:25:47.341974       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1123 09:25:47.343413       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:25:47.349727       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 09:25:47.356476       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1123 09:25:47.360379       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1123 09:25:48.471838       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 09:25:48.681723       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 09:25:48.681779       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:25:49.536933       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1123 09:25:49.694752       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1123 09:30:22.668735       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:30:22.711353       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [87647aaa5cefc0905445d50290aa43a681b39b2952b4b76e62eebbf3bc28afa7] <==
	{"level":"warn","ts":"2025-11-23T09:25:06.061170Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a10780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.061187Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40009b7a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.061199Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a11a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.061214Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018883c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.061228Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a14b40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.061240Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021af0e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.061254Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a143c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.064223Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001717c20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.064544Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a110e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.064838Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016b0780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.064930Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014a3a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065031Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40020b6f00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065161Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400147da40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065226Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026a6960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065286Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400147d0e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065500Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014a30e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065635Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026a7860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065704Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018892c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065649Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400237a3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065768Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40021ae000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065840Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400237a3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065890Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40022f65a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065934Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400253ab40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-11-23T09:25:06.065896Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400237a3c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	F1123 09:25:12.514838       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [0a33af9e8b2a42206b9242b60e1ac591916a754050f750d72ed69394370de6d1] <==
	I1123 09:25:37.948675       1 serving.go:386] Generated self-signed cert in-memory
	I1123 09:25:38.833382       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1123 09:25:38.833431       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:25:38.836291       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1123 09:25:38.837318       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1123 09:25:38.837465       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:25:38.837539       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1123 09:25:48.855602       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-to
ken-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [42babfae983262eb923a97314d7a3b093122d61af813a84a2bcf0956e5326956] <==
	I1123 09:26:36.394028       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 09:26:36.398532       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:26:36.398603       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-857095-m04"
	I1123 09:26:36.399061       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:26:36.405978       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 09:26:36.410273       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:26:36.413789       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:26:36.414983       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:26:36.417314       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:26:36.426520       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:26:36.431782       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 09:26:36.431918       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 09:26:36.432025       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-857095"
	I1123 09:26:36.432076       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-857095-m02"
	I1123 09:26:36.432103       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-857095-m03"
	I1123 09:26:36.432194       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-857095-m04"
	I1123 09:26:36.432429       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 09:26:36.438901       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:26:36.439030       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:26:36.439208       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:26:36.440406       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:26:36.440861       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 09:26:36.448935       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:26:36.450716       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:30:28.327747       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-857095-m04"
	
	
	==> kube-proxy [87aec09c596b09d0dbca59c7079a492763b5c52c19573dc16282f6cb518a9e7e] <==
	I1123 09:25:50.879789       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:25:51.026823       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:25:51.134697       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:25:51.136508       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 09:25:51.136724       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:25:51.202997       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:25:51.203052       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:25:51.216346       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:25:51.216642       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:25:51.216660       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:25:51.218354       1 config.go:200] "Starting service config controller"
	I1123 09:25:51.218379       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:25:51.218397       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:25:51.218402       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:25:51.218413       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:25:51.218417       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:25:51.219083       1 config.go:309] "Starting node config controller"
	I1123 09:25:51.219104       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:25:51.219111       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:25:51.318740       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:25:51.318794       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:25:51.318870       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ae37103ec68135e4d1b955e8ad30e29e8d9e94f916f7903941858b029829d4fa] <==
	E1123 09:24:56.752400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:24:58.592574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 09:24:58.667983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:24:59.382668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:25:18.637026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:25:19.825775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:25:20.082695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:25:20.832268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:25:24.289699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:25:24.449453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:25:25.570813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:25:26.094376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:25:27.437175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:25:27.792384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:25:29.554608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:25:33.363560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:25:37.619850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:25:37.960772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:25:38.292787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:25:39.668574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:25:40.525135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1123 09:25:47.766169       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1123 09:30:22.519928       1 schedule_one.go:975] "Scheduler cache AssumePod failed" err="pod 664c71ba-d1bf-43d2-bf63-f0b04754c921(default/busybox-7b57f96db7-9rhw7) is in the cache, so can't be assumed" pod="default/busybox-7b57f96db7-9rhw7"
	E1123 09:30:22.520060       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="pod 664c71ba-d1bf-43d2-bf63-f0b04754c921(default/busybox-7b57f96db7-9rhw7) is in the cache, so can't be assumed" logger="UnhandledError" pod="default/busybox-7b57f96db7-9rhw7"
	I1123 09:30:22.520105       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-9rhw7" node="ha-857095-m04"
	
	
	==> kubelet <==
	Nov 23 09:25:49 ha-857095 kubelet[802]: I1123 09:25:49.487295     802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4f419f5-ecbc-48e6-8f98-732c4ac5a977-xtables-lock\") pod \"kindnet-r7p2c\" (UID: \"a4f419f5-ecbc-48e6-8f98-732c4ac5a977\") " pod="kube-system/kindnet-r7p2c"
	Nov 23 09:25:49 ha-857095 kubelet[802]: I1123 09:25:49.543395     802 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-857095"
	Nov 23 09:25:49 ha-857095 kubelet[802]: I1123 09:25:49.543435     802 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-857095"
	Nov 23 09:25:49 ha-857095 kubelet[802]: I1123 09:25:49.576170     802 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 09:25:49 ha-857095 kubelet[802]: I1123 09:25:49.718394     802 scope.go:117] "RemoveContainer" containerID="53b6dc95eaa49c07b80a3c7bd2747da0109e5512392b5c622ebfb42a3ff35637"
	Nov 23 09:25:49 ha-857095 kubelet[802]: I1123 09:25:49.719092     802 scope.go:117] "RemoveContainer" containerID="0a33af9e8b2a42206b9242b60e1ac591916a754050f750d72ed69394370de6d1"
	Nov 23 09:25:49 ha-857095 kubelet[802]: E1123 09:25:49.719314     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857095_kube-system(00c64a04113bd2caba88f2fd71957641)\"" pod="kube-system/kube-controller-manager-ha-857095" podUID="00c64a04113bd2caba88f2fd71957641"
	Nov 23 09:25:49 ha-857095 kubelet[802]: I1123 09:25:49.728466     802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-857095" podStartSLOduration=0.728448114 podStartE2EDuration="728.448114ms" podCreationTimestamp="2025-11-23 09:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:25:49.692474502 +0000 UTC m=+106.440478825" watchObservedRunningTime="2025-11-23 09:25:49.728448114 +0000 UTC m=+106.476452437"
	Nov 23 09:25:49 ha-857095 kubelet[802]: W1123 09:25:49.949907     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8/crio-0282e268e1c225d6584a360fa3666cd3b05fe5e4ae10a25f2468beb3ffa25fbd WatchSource:0}: Error finding container 0282e268e1c225d6584a360fa3666cd3b05fe5e4ae10a25f2468beb3ffa25fbd: Status 404 returned error can't find the container with id 0282e268e1c225d6584a360fa3666cd3b05fe5e4ae10a25f2468beb3ffa25fbd
	Nov 23 09:25:49 ha-857095 kubelet[802]: W1123 09:25:49.970465     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8/crio-37bcc6634aaea4d8960df76401e753994761c739db1ba0d3445df4971e6c8476 WatchSource:0}: Error finding container 37bcc6634aaea4d8960df76401e753994761c739db1ba0d3445df4971e6c8476: Status 404 returned error can't find the container with id 37bcc6634aaea4d8960df76401e753994761c739db1ba0d3445df4971e6c8476
	Nov 23 09:25:50 ha-857095 kubelet[802]: W1123 09:25:50.102777     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8/crio-4a02c866c3881ec97506e3c268b9fe4e509c859dea6c3f78578fa9a6f040c9cc WatchSource:0}: Error finding container 4a02c866c3881ec97506e3c268b9fe4e509c859dea6c3f78578fa9a6f040c9cc: Status 404 returned error can't find the container with id 4a02c866c3881ec97506e3c268b9fe4e509c859dea6c3f78578fa9a6f040c9cc
	Nov 23 09:25:51 ha-857095 kubelet[802]: I1123 09:25:51.422866     802 scope.go:117] "RemoveContainer" containerID="0a33af9e8b2a42206b9242b60e1ac591916a754050f750d72ed69394370de6d1"
	Nov 23 09:25:51 ha-857095 kubelet[802]: E1123 09:25:51.423566     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857095_kube-system(00c64a04113bd2caba88f2fd71957641)\"" pod="kube-system/kube-controller-manager-ha-857095" podUID="00c64a04113bd2caba88f2fd71957641"
	Nov 23 09:25:51 ha-857095 kubelet[802]: I1123 09:25:51.764121     802 scope.go:117] "RemoveContainer" containerID="0a33af9e8b2a42206b9242b60e1ac591916a754050f750d72ed69394370de6d1"
	Nov 23 09:25:51 ha-857095 kubelet[802]: E1123 09:25:51.764277     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857095_kube-system(00c64a04113bd2caba88f2fd71957641)\"" pod="kube-system/kube-controller-manager-ha-857095" podUID="00c64a04113bd2caba88f2fd71957641"
	Nov 23 09:26:03 ha-857095 kubelet[802]: E1123 09:26:03.376321     802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f78c0f38108822697ec35fb515af63cbe74822d5919dc7de72b9b416923926\": container with ID starting with e1f78c0f38108822697ec35fb515af63cbe74822d5919dc7de72b9b416923926 not found: ID does not exist" containerID="e1f78c0f38108822697ec35fb515af63cbe74822d5919dc7de72b9b416923926"
	Nov 23 09:26:03 ha-857095 kubelet[802]: I1123 09:26:03.376378     802 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="e1f78c0f38108822697ec35fb515af63cbe74822d5919dc7de72b9b416923926" err="rpc error: code = NotFound desc = could not find container \"e1f78c0f38108822697ec35fb515af63cbe74822d5919dc7de72b9b416923926\": container with ID starting with e1f78c0f38108822697ec35fb515af63cbe74822d5919dc7de72b9b416923926 not found: ID does not exist"
	Nov 23 09:26:03 ha-857095 kubelet[802]: E1123 09:26:03.434387     802 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2828455af2abf3ed01ffea7b324458e4f00c51da375d485188a001929b1e774a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2828455af2abf3ed01ffea7b324458e4f00c51da375d485188a001929b1e774a/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-857095_00c64a04113bd2caba88f2fd71957641/kube-controller-manager/3.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-857095_00c64a04113bd2caba88f2fd71957641/kube-controller-manager/3.log: no such file or directory
	Nov 23 09:26:03 ha-857095 kubelet[802]: E1123 09:26:03.440783     802 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/048a676d17b8beaf191cb14c2716a86bde8600dcabf403282009e019ff371098/diff" to get inode usage: stat /var/lib/containers/storage/overlay/048a676d17b8beaf191cb14c2716a86bde8600dcabf403282009e019ff371098/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-857095_00c64a04113bd2caba88f2fd71957641/kube-controller-manager/2.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-857095_00c64a04113bd2caba88f2fd71957641/kube-controller-manager/2.log: no such file or directory
	Nov 23 09:26:05 ha-857095 kubelet[802]: I1123 09:26:05.461096     802 scope.go:117] "RemoveContainer" containerID="0a33af9e8b2a42206b9242b60e1ac591916a754050f750d72ed69394370de6d1"
	Nov 23 09:26:05 ha-857095 kubelet[802]: E1123 09:26:05.461342     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857095_kube-system(00c64a04113bd2caba88f2fd71957641)\"" pod="kube-system/kube-controller-manager-ha-857095" podUID="00c64a04113bd2caba88f2fd71957641"
	Nov 23 09:26:19 ha-857095 kubelet[802]: I1123 09:26:19.461244     802 scope.go:117] "RemoveContainer" containerID="0a33af9e8b2a42206b9242b60e1ac591916a754050f750d72ed69394370de6d1"
	Nov 23 09:26:19 ha-857095 kubelet[802]: E1123 09:26:19.462321     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-857095_kube-system(00c64a04113bd2caba88f2fd71957641)\"" pod="kube-system/kube-controller-manager-ha-857095" podUID="00c64a04113bd2caba88f2fd71957641"
	Nov 23 09:26:20 ha-857095 kubelet[802]: I1123 09:26:20.849244     802 scope.go:117] "RemoveContainer" containerID="d01764f14c48facfa6e2f2a116b511c2ae876c073a208e73e2fd13c40f370017"
	Nov 23 09:26:33 ha-857095 kubelet[802]: I1123 09:26:33.461191     802 scope.go:117] "RemoveContainer" containerID="0a33af9e8b2a42206b9242b60e1ac591916a754050f750d72ed69394370de6d1"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-857095 -n ha-857095
helpers_test.go:269: (dbg) Run:  kubectl --context ha-857095 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.32s)
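For anyone replaying this post-mortem by hand, the sketch below mirrors the non-Running-pod query the helper runs above (kubectl --context ha-857095 get po -A --field-selector=status.phase!=Running). It is a minimal Go sketch, not the helpers_test.go implementation; treating any non-empty result as a degraded cluster is an assumption made here for illustration.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Same query the post-mortem helper runs: list every pod, in any
		// namespace, whose phase is not Running.
		out, err := exec.Command("kubectl",
			"--context", "ha-857095",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s\n", err, out)
			return
		}
		names := strings.Fields(string(out))
		if len(names) == 0 {
			fmt.Println("all pods Running")
			return
		}
		// Assumption: any non-Running pod is treated here as a sign of a
		// degraded cluster; the real test applies its own expectations.
		fmt.Printf("non-Running pods: %v\n", names)
	}

Run against a kubeconfig that still has the ha-857095 context, this prints either "all pods Running" or the names of the pods the helper would have flagged.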

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.81s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-608547 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-608547 --output=json --user=testUser: exit status 80 (1.805333093s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7cdd6b41-8fef-4ebf-a0ce-3b6355f764cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-608547 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"016730d7-90a3-4cfc-96ee-f79e7efd8201","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-23T09:35:36Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"79ec09ce-3d82-4c59-815d-8e38933fe878","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-608547 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.81s)
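The pause failure above is surfaced through minikube's CloudEvents-style --output=json stream; each stdout line is a JSON object with specversion, type, and data fields, as shown in the dump. Below is a minimal Go sketch for pulling the error events out of such a stream; reading the stream from stdin and keeping only the type and data fields are assumptions made for illustration, not how json_output_test.go consumes it.

	package main
	
	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)
	
	// event keeps only the fields visible in the JSON lines above that are
	// needed to spot an error event.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}
	
	func main() {
		// Assumption: minikube's --output=json is piped to stdin, one JSON
		// object per line, as in the stdout dump above.
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some event lines are long
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event line
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error event %q (exitcode %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}

Piping the failing command's output through this filter would print the GUEST_PAUSE event seen in the dump above.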

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.95s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-608547 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-608547 --output=json --user=testUser: exit status 80 (1.953230904s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b4d27138-59a8-4f41-95e7-3a820127e7e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-608547 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"985a142c-b4d8-4077-aa32-f25a893dc03d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-23T09:35:38Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"2aa45a8e-1a94-4b0e-8c39-b842af593f8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-608547 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.95s)
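Both the pause and unpause failures bottom out in the same runtime error: sudo runc list -f json exits 1 with "open /run/runc: no such file or directory" on the node. The sketch below replays that command over minikube ssh so the error can be reproduced outside the test; using the json-output-608547 profile and out/minikube-linux-arm64 ssh -- are assumptions based on the commands already shown in this report, and the sketch only echoes whatever runc prints.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Replay the command the pause/unpause path runs inside the node:
		//   sudo runc list -f json
		cmd := exec.Command("out/minikube-linux-arm64",
			"-p", "json-output-608547", "ssh", "--",
			"sudo", "runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// On this run the command exits 1 with
			// "open /run/runc: no such file or directory".
			fmt.Printf("runc list failed: %v\n", err)
		}
	}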

                                                
                                    
x
+
TestPause/serial/Pause (7.98s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-902289 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-902289 --alsologtostderr -v=5: exit status 80 (2.454825242s)

                                                
                                                
-- stdout --
	* Pausing node pause-902289 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:53:59.056335  429804 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:53:59.056465  429804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:53:59.056474  429804 out.go:374] Setting ErrFile to fd 2...
	I1123 09:53:59.056480  429804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:53:59.056724  429804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:53:59.057088  429804 out.go:368] Setting JSON to false
	I1123 09:53:59.057117  429804 mustload.go:66] Loading cluster: pause-902289
	I1123 09:53:59.057613  429804 config.go:182] Loaded profile config "pause-902289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:53:59.058082  429804 cli_runner.go:164] Run: docker container inspect pause-902289 --format={{.State.Status}}
	I1123 09:53:59.075603  429804 host.go:66] Checking if "pause-902289" exists ...
	I1123 09:53:59.075937  429804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:53:59.156385  429804 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 09:53:59.144564811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:53:59.157074  429804 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-902289 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 09:53:59.160376  429804 out.go:179] * Pausing node pause-902289 ... 
	I1123 09:53:59.164064  429804 host.go:66] Checking if "pause-902289" exists ...
	I1123 09:53:59.164571  429804 ssh_runner.go:195] Run: systemctl --version
	I1123 09:53:59.164640  429804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-902289
	I1123 09:53:59.182318  429804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33349 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/pause-902289/id_rsa Username:docker}
	I1123 09:53:59.296541  429804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:53:59.315712  429804 pause.go:52] kubelet running: true
	I1123 09:53:59.315825  429804 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:53:59.630213  429804 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:53:59.630361  429804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:53:59.755572  429804 cri.go:89] found id: "656a67f7696253ff07fee5935f113bf2aab9c31a82f76613d0a52bb745cf02e3"
	I1123 09:53:59.755611  429804 cri.go:89] found id: "e2c772377726bd8b3beea2efc62f66e9cfb85a568feb56ef5d6c49a797734800"
	I1123 09:53:59.755625  429804 cri.go:89] found id: "28a462431337106b621a0e2e0ebeda0f9205283900c74c993b5b8f1e0ab5751b"
	I1123 09:53:59.755629  429804 cri.go:89] found id: "891c5da2b2cf8e87bd8f24a275f47debf482222a675ab0960fad8dd9ee882ab2"
	I1123 09:53:59.755651  429804 cri.go:89] found id: "301fb617b1f960338a688814750032b12882ded15eac0506bfd49ddf0934870b"
	I1123 09:53:59.755663  429804 cri.go:89] found id: "924727f67067568042a015bc3e901ad4ad44c23a740962980e1d770157ccd349"
	I1123 09:53:59.755666  429804 cri.go:89] found id: "e6a3434aae7399365305f02fe70d5f6ea51d903da9bc3be6ddc186ca7434c593"
	I1123 09:53:59.755669  429804 cri.go:89] found id: "4bb5df37d1031824f5c4150f63585d202677be311760ed8886913f82f675b2d2"
	I1123 09:53:59.755672  429804 cri.go:89] found id: "a4f1980bab92b13afddf2474d8b4b5b8b53f0cd0a64c295106b26ae3db1103af"
	I1123 09:53:59.755678  429804 cri.go:89] found id: "21b7e1366f30e127ba37cbd9bc0a22fc7073ee77f7eb6a86efe280ea69f595b0"
	I1123 09:53:59.755688  429804 cri.go:89] found id: "b6aece6a157139e982fe1e6ec7e327f9f62fa96ac78ffcbccff18c993426e2a5"
	I1123 09:53:59.755691  429804 cri.go:89] found id: "00f2d513b8a4eeb15f37c571edaf256e1cc41499c119fb407d7ad17fb1c4e582"
	I1123 09:53:59.755707  429804 cri.go:89] found id: "9f818fc66635cc44eeb47e4207008d4a814ac58d7495df544d7c6550de4cfd40"
	I1123 09:53:59.755717  429804 cri.go:89] found id: "f9cfd21effcadf8269de4c91c08df2b43305336549c8f0bd07926f49473ef1dd"
	I1123 09:53:59.755721  429804 cri.go:89] found id: ""
	I1123 09:53:59.755794  429804 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:53:59.770370  429804 retry.go:31] will retry after 328.097124ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:53:59Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:54:00.098734  429804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:54:00.124863  429804 pause.go:52] kubelet running: false
	I1123 09:54:00.124935  429804 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:54:00.442400  429804 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:54:00.442495  429804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:54:00.596109  429804 cri.go:89] found id: "656a67f7696253ff07fee5935f113bf2aab9c31a82f76613d0a52bb745cf02e3"
	I1123 09:54:00.596127  429804 cri.go:89] found id: "e2c772377726bd8b3beea2efc62f66e9cfb85a568feb56ef5d6c49a797734800"
	I1123 09:54:00.596132  429804 cri.go:89] found id: "28a462431337106b621a0e2e0ebeda0f9205283900c74c993b5b8f1e0ab5751b"
	I1123 09:54:00.596136  429804 cri.go:89] found id: "891c5da2b2cf8e87bd8f24a275f47debf482222a675ab0960fad8dd9ee882ab2"
	I1123 09:54:00.596138  429804 cri.go:89] found id: "301fb617b1f960338a688814750032b12882ded15eac0506bfd49ddf0934870b"
	I1123 09:54:00.596142  429804 cri.go:89] found id: "924727f67067568042a015bc3e901ad4ad44c23a740962980e1d770157ccd349"
	I1123 09:54:00.596144  429804 cri.go:89] found id: "e6a3434aae7399365305f02fe70d5f6ea51d903da9bc3be6ddc186ca7434c593"
	I1123 09:54:00.596155  429804 cri.go:89] found id: "4bb5df37d1031824f5c4150f63585d202677be311760ed8886913f82f675b2d2"
	I1123 09:54:00.596158  429804 cri.go:89] found id: "a4f1980bab92b13afddf2474d8b4b5b8b53f0cd0a64c295106b26ae3db1103af"
	I1123 09:54:00.596168  429804 cri.go:89] found id: "21b7e1366f30e127ba37cbd9bc0a22fc7073ee77f7eb6a86efe280ea69f595b0"
	I1123 09:54:00.596171  429804 cri.go:89] found id: "b6aece6a157139e982fe1e6ec7e327f9f62fa96ac78ffcbccff18c993426e2a5"
	I1123 09:54:00.596182  429804 cri.go:89] found id: "00f2d513b8a4eeb15f37c571edaf256e1cc41499c119fb407d7ad17fb1c4e582"
	I1123 09:54:00.596185  429804 cri.go:89] found id: "9f818fc66635cc44eeb47e4207008d4a814ac58d7495df544d7c6550de4cfd40"
	I1123 09:54:00.596188  429804 cri.go:89] found id: "f9cfd21effcadf8269de4c91c08df2b43305336549c8f0bd07926f49473ef1dd"
	I1123 09:54:00.596195  429804 cri.go:89] found id: ""
	I1123 09:54:00.596251  429804 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:54:00.616687  429804 retry.go:31] will retry after 537.964644ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:54:00Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:54:01.155456  429804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:54:01.172662  429804 pause.go:52] kubelet running: false
	I1123 09:54:01.172739  429804 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:54:01.328028  429804 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:54:01.328117  429804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:54:01.411671  429804 cri.go:89] found id: "656a67f7696253ff07fee5935f113bf2aab9c31a82f76613d0a52bb745cf02e3"
	I1123 09:54:01.411696  429804 cri.go:89] found id: "e2c772377726bd8b3beea2efc62f66e9cfb85a568feb56ef5d6c49a797734800"
	I1123 09:54:01.411701  429804 cri.go:89] found id: "28a462431337106b621a0e2e0ebeda0f9205283900c74c993b5b8f1e0ab5751b"
	I1123 09:54:01.411705  429804 cri.go:89] found id: "891c5da2b2cf8e87bd8f24a275f47debf482222a675ab0960fad8dd9ee882ab2"
	I1123 09:54:01.411708  429804 cri.go:89] found id: "301fb617b1f960338a688814750032b12882ded15eac0506bfd49ddf0934870b"
	I1123 09:54:01.411712  429804 cri.go:89] found id: "924727f67067568042a015bc3e901ad4ad44c23a740962980e1d770157ccd349"
	I1123 09:54:01.411715  429804 cri.go:89] found id: "e6a3434aae7399365305f02fe70d5f6ea51d903da9bc3be6ddc186ca7434c593"
	I1123 09:54:01.411718  429804 cri.go:89] found id: "4bb5df37d1031824f5c4150f63585d202677be311760ed8886913f82f675b2d2"
	I1123 09:54:01.411721  429804 cri.go:89] found id: "a4f1980bab92b13afddf2474d8b4b5b8b53f0cd0a64c295106b26ae3db1103af"
	I1123 09:54:01.411728  429804 cri.go:89] found id: "21b7e1366f30e127ba37cbd9bc0a22fc7073ee77f7eb6a86efe280ea69f595b0"
	I1123 09:54:01.411732  429804 cri.go:89] found id: "b6aece6a157139e982fe1e6ec7e327f9f62fa96ac78ffcbccff18c993426e2a5"
	I1123 09:54:01.411735  429804 cri.go:89] found id: "00f2d513b8a4eeb15f37c571edaf256e1cc41499c119fb407d7ad17fb1c4e582"
	I1123 09:54:01.411738  429804 cri.go:89] found id: "9f818fc66635cc44eeb47e4207008d4a814ac58d7495df544d7c6550de4cfd40"
	I1123 09:54:01.411750  429804 cri.go:89] found id: "f9cfd21effcadf8269de4c91c08df2b43305336549c8f0bd07926f49473ef1dd"
	I1123 09:54:01.411753  429804 cri.go:89] found id: ""
	I1123 09:54:01.411801  429804 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:54:01.427845  429804 out.go:203] 
	W1123 09:54:01.430842  429804 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:54:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:54:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:54:01.430863  429804 out.go:285] * 
	* 
	W1123 09:54:01.437941  429804 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:54:01.440424  429804 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-902289 --alsologtostderr -v=5" : exit status 80
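The stderr above shows where the pause path gives up: crictl does list the running kube-system containers, but every `sudo runc list -f json` attempt exits 1 with `open /run/runc: no such file or directory`, and after the retries minikube surfaces GUEST_PAUSE. Below is a minimal sketch for re-running those two steps by hand on the node (for example inside `minikube ssh -p pause-902289`); the command strings are copied from the log, everything else (package layout, error handling) is illustrative only.

```go
// Sketch: reproduce the container-listing step that fails in the log above.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output and error status.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\nerr=%v\n%s\n", name, args, err, out)
}

func main() {
	// Same label filter the pause path uses for the kube-system namespace.
	run("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")
	// This is the call that fails in the captured log with
	// "open /run/runc: no such file or directory".
	run("sudo", "runc", "list", "-f", "json")
}
```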
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-902289
helpers_test.go:243: (dbg) docker inspect pause-902289:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9c843fb4ed6a911e5401833037dc7fc7f90714504a39cf8efb88269a5f938a10",
	        "Created": "2025-11-23T09:52:09.251636502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 417492,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:52:09.336769907Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/9c843fb4ed6a911e5401833037dc7fc7f90714504a39cf8efb88269a5f938a10/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9c843fb4ed6a911e5401833037dc7fc7f90714504a39cf8efb88269a5f938a10/hostname",
	        "HostsPath": "/var/lib/docker/containers/9c843fb4ed6a911e5401833037dc7fc7f90714504a39cf8efb88269a5f938a10/hosts",
	        "LogPath": "/var/lib/docker/containers/9c843fb4ed6a911e5401833037dc7fc7f90714504a39cf8efb88269a5f938a10/9c843fb4ed6a911e5401833037dc7fc7f90714504a39cf8efb88269a5f938a10-json.log",
	        "Name": "/pause-902289",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-902289:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-902289",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9c843fb4ed6a911e5401833037dc7fc7f90714504a39cf8efb88269a5f938a10",
	                "LowerDir": "/var/lib/docker/overlay2/cb22273ab66a6b5ac51dfa8fc1dfb117cbc7c2e11c9338f41068244f7cab27ef-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb22273ab66a6b5ac51dfa8fc1dfb117cbc7c2e11c9338f41068244f7cab27ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb22273ab66a6b5ac51dfa8fc1dfb117cbc7c2e11c9338f41068244f7cab27ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb22273ab66a6b5ac51dfa8fc1dfb117cbc7c2e11c9338f41068244f7cab27ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-902289",
	                "Source": "/var/lib/docker/volumes/pause-902289/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-902289",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-902289",
	                "name.minikube.sigs.k8s.io": "pause-902289",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "64d7250f44a672bd5612604485455fbe232d5c25624dc64a54078f96216efe60",
	            "SandboxKey": "/var/run/docker/netns/64d7250f44a6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33349"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33350"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33353"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33351"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33352"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-902289": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:78:51:ee:fb:f8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "542e7a0b0cabf720b7fa60abc18e7d5229e71f392637e7183f5acfb2e30021af",
	                    "EndpointID": "d9d72e7e00275d7a31f50d6ef9ed6dff714a3598613eb5eb7d50188e1c18e66b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-902289",
	                        "9c843fb4ed6a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
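For reference, the `NetworkSettings.Ports` map in the inspect output above is where the pause log earlier obtained its SSH endpoint (HostPort 33349 for `22/tcp`), via the Go template shown in the `cli_runner` line. Below is a minimal sketch of the same lookup; only the container name and the template come from this report, the rest is illustrative.

```go
// Sketch: read the host port mapped to 22/tcp from docker inspect,
// using the same format template that appears in the pause log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const format = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format,
		"pause-902289").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// For the inspect output shown above this would print 33349.
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}
```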
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-902289 -n pause-902289
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-902289 -n pause-902289: exit status 2 (431.180401ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-902289 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-902289 logs -n 25: (1.574704318s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-507563 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo systemctl status docker --all --full --no-pager                                      │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo systemctl cat docker --no-pager                                                      │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo cat /etc/docker/daemon.json                                                          │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo docker system info                                                                   │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo cri-dockerd --version                                                                │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo systemctl cat containerd --no-pager                                                  │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo cat /etc/containerd/config.toml                                                      │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo containerd config dump                                                               │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo systemctl status crio --all --full --no-pager                                        │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo systemctl cat crio --no-pager                                                        │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo crio config                                                                          │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ delete  │ -p cilium-507563                                                                                           │ cilium-507563            │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │ 23 Nov 25 09:53 UTC │
	│ start   │ -p force-systemd-env-653569 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-653569 │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │ 23 Nov 25 09:54 UTC │
	│ start   │ -p pause-902289 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                           │ pause-902289             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │ 23 Nov 25 09:53 UTC │
	│ pause   │ -p pause-902289 --alsologtostderr -v=5                                                                     │ pause-902289             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ delete  │ -p force-systemd-env-653569                                                                                │ force-systemd-env-653569 │ jenkins │ v1.37.0 │ 23 Nov 25 09:54 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:53:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:53:31.557124  427587 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:53:31.557228  427587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:53:31.557234  427587 out.go:374] Setting ErrFile to fd 2...
	I1123 09:53:31.557238  427587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:53:31.557690  427587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:53:31.558169  427587 out.go:368] Setting JSON to false
	I1123 09:53:31.559324  427587 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9361,"bootTime":1763882251,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 09:53:31.559429  427587 start.go:143] virtualization:  
	I1123 09:53:31.562958  427587 out.go:179] * [pause-902289] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 09:53:31.567098  427587 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:53:31.567179  427587 notify.go:221] Checking for updates...
	I1123 09:53:31.570896  427587 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:53:31.573878  427587 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:53:31.576823  427587 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 09:53:31.580348  427587 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 09:53:31.583425  427587 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:53:31.586887  427587 config.go:182] Loaded profile config "pause-902289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:53:31.587529  427587 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:53:31.622434  427587 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 09:53:31.622539  427587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:53:31.720755  427587 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-23 09:53:31.7101115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:53:31.720860  427587 docker.go:319] overlay module found
	I1123 09:53:31.724080  427587 out.go:179] * Using the docker driver based on existing profile
	I1123 09:53:31.726839  427587 start.go:309] selected driver: docker
	I1123 09:53:31.726859  427587 start.go:927] validating driver "docker" against &{Name:pause-902289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-902289 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:53:31.727007  427587 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:53:31.727110  427587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:53:31.794299  427587 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-23 09:53:31.78398145 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:53:31.794730  427587 cni.go:84] Creating CNI manager for ""
	I1123 09:53:31.794798  427587 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:53:31.794844  427587 start.go:353] cluster config:
	{Name:pause-902289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-902289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:53:31.797955  427587 out.go:179] * Starting "pause-902289" primary control-plane node in "pause-902289" cluster
	I1123 09:53:31.800821  427587 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:53:31.804708  427587 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:53:31.811278  427587 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:53:31.811322  427587 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 09:53:31.811333  427587 cache.go:65] Caching tarball of preloaded images
	I1123 09:53:31.811348  427587 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:53:31.811430  427587 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 09:53:31.811439  427587 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:53:31.811573  427587 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/config.json ...
	I1123 09:53:31.831990  427587 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:53:31.832013  427587 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:53:31.832027  427587 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:53:31.832059  427587 start.go:360] acquireMachinesLock for pause-902289: {Name:mkbb7d05b5e5c83fd1cc7ca4fd97992510c52c11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:53:31.832118  427587 start.go:364] duration metric: took 34.668µs to acquireMachinesLock for "pause-902289"
	I1123 09:53:31.832150  427587 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:53:31.832157  427587 fix.go:54] fixHost starting: 
	I1123 09:53:31.832510  427587 cli_runner.go:164] Run: docker container inspect pause-902289 --format={{.State.Status}}
	I1123 09:53:31.851600  427587 fix.go:112] recreateIfNeeded on pause-902289: state=Running err=<nil>
	W1123 09:53:31.851638  427587 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:53:29.190420  426617 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-653569:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.390072539s)
	I1123 09:53:29.190451  426617 kic.go:203] duration metric: took 4.39022885s to extract preloaded images to volume ...
	W1123 09:53:29.190582  426617 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 09:53:29.190676  426617 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:53:29.261590  426617 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-653569 --name force-systemd-env-653569 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-653569 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-653569 --network force-systemd-env-653569 --ip 192.168.85.2 --volume force-systemd-env-653569:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 09:53:29.639889  426617 cli_runner.go:164] Run: docker container inspect force-systemd-env-653569 --format={{.State.Running}}
	I1123 09:53:29.670047  426617 cli_runner.go:164] Run: docker container inspect force-systemd-env-653569 --format={{.State.Status}}
	I1123 09:53:29.693812  426617 cli_runner.go:164] Run: docker exec force-systemd-env-653569 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:53:29.754999  426617 oci.go:144] the created container "force-systemd-env-653569" has a running status.
	I1123 09:53:29.755032  426617 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/force-systemd-env-653569/id_rsa...
	I1123 09:53:30.320376  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/force-systemd-env-653569/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1123 09:53:30.320424  426617 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-282998/.minikube/machines/force-systemd-env-653569/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 09:53:30.339921  426617 cli_runner.go:164] Run: docker container inspect force-systemd-env-653569 --format={{.State.Status}}
	I1123 09:53:30.357595  426617 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 09:53:30.357620  426617 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-653569 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 09:53:30.413188  426617 cli_runner.go:164] Run: docker container inspect force-systemd-env-653569 --format={{.State.Status}}
	I1123 09:53:30.431175  426617 machine.go:94] provisionDockerMachine start ...
	I1123 09:53:30.431269  426617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-653569
	I1123 09:53:30.448963  426617 main.go:143] libmachine: Using SSH client type: native
	I1123 09:53:30.449337  426617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33369 <nil> <nil>}
	I1123 09:53:30.449352  426617 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:53:30.450133  426617 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48526->127.0.0.1:33369: read: connection reset by peer
	I1123 09:53:33.600946  426617 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-653569
	
	I1123 09:53:33.600969  426617 ubuntu.go:182] provisioning hostname "force-systemd-env-653569"
	I1123 09:53:33.601043  426617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-653569
	I1123 09:53:33.618286  426617 main.go:143] libmachine: Using SSH client type: native
	I1123 09:53:33.618593  426617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33369 <nil> <nil>}
	I1123 09:53:33.618610  426617 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-653569 && echo "force-systemd-env-653569" | sudo tee /etc/hostname
	I1123 09:53:33.782626  426617 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-653569
	
	I1123 09:53:33.782729  426617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-653569
	I1123 09:53:33.801196  426617 main.go:143] libmachine: Using SSH client type: native
	I1123 09:53:33.801634  426617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33369 <nil> <nil>}
	I1123 09:53:33.801660  426617 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-653569' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-653569/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-653569' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:53:33.957611  426617 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:53:33.957679  426617 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 09:53:33.957722  426617 ubuntu.go:190] setting up certificates
	I1123 09:53:33.957763  426617 provision.go:84] configureAuth start
	I1123 09:53:33.957845  426617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-653569
	I1123 09:53:33.975264  426617 provision.go:143] copyHostCerts
	I1123 09:53:33.975311  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:53:33.975343  426617 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 09:53:33.975350  426617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:53:33.975426  426617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 09:53:33.975510  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:53:33.975525  426617 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 09:53:33.975535  426617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:53:33.975561  426617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 09:53:33.975640  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:53:33.975655  426617 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 09:53:33.975660  426617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:53:33.975685  426617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 09:53:33.975738  426617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-653569 san=[127.0.0.1 192.168.85.2 force-systemd-env-653569 localhost minikube]
	I1123 09:53:34.314067  426617 provision.go:177] copyRemoteCerts
	I1123 09:53:34.314142  426617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:53:34.314189  426617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-653569
	I1123 09:53:34.334218  426617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/force-systemd-env-653569/id_rsa Username:docker}
	I1123 09:53:34.440854  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1123 09:53:34.440915  426617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:53:34.458331  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1123 09:53:34.458458  426617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1123 09:53:34.475851  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1123 09:53:34.475963  426617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:53:34.494236  426617 provision.go:87] duration metric: took 536.430104ms to configureAuth
	I1123 09:53:34.494268  426617 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:53:34.494463  426617 config.go:182] Loaded profile config "force-systemd-env-653569": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:53:34.494568  426617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-653569
	I1123 09:53:34.514490  426617 main.go:143] libmachine: Using SSH client type: native
	I1123 09:53:34.514821  426617 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33369 <nil> <nil>}
	I1123 09:53:34.514839  426617 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:53:34.829521  426617 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:53:34.829546  426617 machine.go:97] duration metric: took 4.398347078s to provisionDockerMachine
	I1123 09:53:34.829558  426617 client.go:176] duration metric: took 10.702635624s to LocalClient.Create
	I1123 09:53:34.829571  426617 start.go:167] duration metric: took 10.702699887s to libmachine.API.Create "force-systemd-env-653569"
	I1123 09:53:34.829579  426617 start.go:293] postStartSetup for "force-systemd-env-653569" (driver="docker")
	I1123 09:53:34.829589  426617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:53:34.829680  426617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:53:34.829726  426617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-653569
	I1123 09:53:34.847629  426617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/force-systemd-env-653569/id_rsa Username:docker}
	I1123 09:53:34.953752  426617 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:53:34.957029  426617 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:53:34.957059  426617 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:53:34.957072  426617 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 09:53:34.957126  426617 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 09:53:34.957204  426617 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 09:53:34.957211  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /etc/ssl/certs/2849042.pem
	I1123 09:53:34.957305  426617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:53:34.964742  426617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:53:34.981982  426617 start.go:296] duration metric: took 152.389049ms for postStartSetup
	I1123 09:53:34.982366  426617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-653569
	I1123 09:53:34.999199  426617 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/config.json ...
	I1123 09:53:34.999509  426617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:53:34.999592  426617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-653569
	I1123 09:53:35.022596  426617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/force-systemd-env-653569/id_rsa Username:docker}
	I1123 09:53:35.126462  426617 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:53:35.131316  426617 start.go:128] duration metric: took 11.007857938s to createHost
	I1123 09:53:35.131342  426617 start.go:83] releasing machines lock for "force-systemd-env-653569", held for 11.007980179s
	I1123 09:53:35.131412  426617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-653569
	I1123 09:53:35.148018  426617 ssh_runner.go:195] Run: cat /version.json
	I1123 09:53:35.148034  426617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:53:35.148069  426617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-653569
	I1123 09:53:35.148090  426617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-653569
	I1123 09:53:35.168839  426617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/force-systemd-env-653569/id_rsa Username:docker}
	I1123 09:53:35.183066  426617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/force-systemd-env-653569/id_rsa Username:docker}
	I1123 09:53:35.362407  426617 ssh_runner.go:195] Run: systemctl --version
	I1123 09:53:35.368838  426617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:53:35.406619  426617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:53:35.411157  426617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:53:35.411235  426617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:53:35.441835  426617 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 09:53:35.441859  426617 start.go:496] detecting cgroup driver to use...
	I1123 09:53:35.441884  426617 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1123 09:53:35.441958  426617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:53:35.459986  426617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:53:35.473237  426617 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:53:35.473308  426617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:53:35.491636  426617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:53:35.510920  426617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:53:35.623807  426617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:53:35.751804  426617 docker.go:234] disabling docker service ...
	I1123 09:53:35.751912  426617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:53:35.772015  426617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:53:35.785651  426617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:53:35.914527  426617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:53:36.045100  426617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:53:36.058464  426617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:53:36.072514  426617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:53:36.072591  426617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:53:36.081467  426617 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:53:36.081550  426617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:53:36.090779  426617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:53:36.099759  426617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:53:36.108724  426617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:53:36.117347  426617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:53:36.126298  426617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:53:36.139786  426617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:53:36.148648  426617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:53:36.156380  426617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:53:36.164105  426617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:53:36.271091  426617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:53:36.442746  426617 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:53:36.442817  426617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:53:36.447025  426617 start.go:564] Will wait 60s for crictl version
	I1123 09:53:36.447091  426617 ssh_runner.go:195] Run: which crictl
	I1123 09:53:36.450723  426617 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:53:36.479608  426617 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:53:36.479693  426617 ssh_runner.go:195] Run: crio --version
	I1123 09:53:36.511393  426617 ssh_runner.go:195] Run: crio --version
	I1123 09:53:36.545093  426617 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:53:31.854723  427587 out.go:252] * Updating the running docker "pause-902289" container ...
	I1123 09:53:31.854765  427587 machine.go:94] provisionDockerMachine start ...
	I1123 09:53:31.854855  427587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-902289
	I1123 09:53:31.872345  427587 main.go:143] libmachine: Using SSH client type: native
	I1123 09:53:31.872675  427587 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33349 <nil> <nil>}
	I1123 09:53:31.872693  427587 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:53:32.025329  427587 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-902289
	
	I1123 09:53:32.025357  427587 ubuntu.go:182] provisioning hostname "pause-902289"
	I1123 09:53:32.025455  427587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-902289
	I1123 09:53:32.046672  427587 main.go:143] libmachine: Using SSH client type: native
	I1123 09:53:32.047030  427587 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33349 <nil> <nil>}
	I1123 09:53:32.047047  427587 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-902289 && echo "pause-902289" | sudo tee /etc/hostname
	I1123 09:53:32.206815  427587 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-902289
	
	I1123 09:53:32.206898  427587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-902289
	I1123 09:53:32.224746  427587 main.go:143] libmachine: Using SSH client type: native
	I1123 09:53:32.225070  427587 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33349 <nil> <nil>}
	I1123 09:53:32.225093  427587 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-902289' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-902289/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-902289' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:53:32.377822  427587 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:53:32.377847  427587 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 09:53:32.377869  427587 ubuntu.go:190] setting up certificates
	I1123 09:53:32.377879  427587 provision.go:84] configureAuth start
	I1123 09:53:32.377937  427587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-902289
	I1123 09:53:32.398117  427587 provision.go:143] copyHostCerts
	I1123 09:53:32.398187  427587 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 09:53:32.398209  427587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 09:53:32.398288  427587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 09:53:32.398402  427587 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 09:53:32.398414  427587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 09:53:32.398444  427587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 09:53:32.398514  427587 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 09:53:32.398524  427587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 09:53:32.398553  427587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 09:53:32.398613  427587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.pause-902289 san=[127.0.0.1 192.168.76.2 localhost minikube pause-902289]
	I1123 09:53:32.573390  427587 provision.go:177] copyRemoteCerts
	I1123 09:53:32.573469  427587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:53:32.573520  427587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-902289
	I1123 09:53:32.591302  427587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33349 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/pause-902289/id_rsa Username:docker}
	I1123 09:53:32.697378  427587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:53:32.715374  427587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 09:53:32.732899  427587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 09:53:32.750485  427587 provision.go:87] duration metric: took 372.57227ms to configureAuth
	I1123 09:53:32.750514  427587 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:53:32.750763  427587 config.go:182] Loaded profile config "pause-902289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:53:32.750875  427587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-902289
	I1123 09:53:32.768727  427587 main.go:143] libmachine: Using SSH client type: native
	I1123 09:53:32.769052  427587 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33349 <nil> <nil>}
	I1123 09:53:32.769070  427587 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:53:36.547941  426617 cli_runner.go:164] Run: docker network inspect force-systemd-env-653569 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:53:36.563894  426617 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 09:53:36.567974  426617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:53:36.577470  426617 kubeadm.go:884] updating cluster {Name:force-systemd-env-653569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-653569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:53:36.577588  426617 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:53:36.577644  426617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:53:36.608417  426617 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:53:36.608443  426617 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:53:36.608497  426617 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:53:36.633691  426617 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:53:36.633715  426617 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:53:36.633723  426617 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1123 09:53:36.633824  426617 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-653569 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-653569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:53:36.633910  426617 ssh_runner.go:195] Run: crio config
	I1123 09:53:36.707150  426617 cni.go:84] Creating CNI manager for ""
	I1123 09:53:36.707173  426617 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:53:36.707194  426617 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:53:36.707217  426617 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-653569 NodeName:force-systemd-env-653569 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:53:36.707343  426617 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-653569"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:53:36.707415  426617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:53:36.715226  426617 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:53:36.715308  426617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:53:36.722713  426617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1123 09:53:36.735376  426617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:53:36.748017  426617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1123 09:53:36.760492  426617 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:53:36.763999  426617 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:53:36.773793  426617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:53:36.897154  426617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:53:36.913365  426617 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569 for IP: 192.168.85.2
	I1123 09:53:36.913514  426617 certs.go:195] generating shared ca certs ...
	I1123 09:53:36.913548  426617 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:53:36.913711  426617 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 09:53:36.913789  426617 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 09:53:36.913813  426617 certs.go:257] generating profile certs ...
	I1123 09:53:36.913888  426617 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/client.key
	I1123 09:53:36.913918  426617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/client.crt with IP's: []
	I1123 09:53:37.303192  426617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/client.crt ...
	I1123 09:53:37.303226  426617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/client.crt: {Name:mkb23374593d37d5425e8410031d6c8a04d48c74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:53:37.303430  426617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/client.key ...
	I1123 09:53:37.303445  426617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/client.key: {Name:mk112ef570809dadb2fb64016dc0ec67e182ad7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:53:37.303550  426617 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/apiserver.key.e64425a5
	I1123 09:53:37.303569  426617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/apiserver.crt.e64425a5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 09:53:37.734373  426617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/apiserver.crt.e64425a5 ...
	I1123 09:53:37.734406  426617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/apiserver.crt.e64425a5: {Name:mk498e923c5a0897a59642bf7496d3f425bc3124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:53:37.734598  426617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/apiserver.key.e64425a5 ...
	I1123 09:53:37.734613  426617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/apiserver.key.e64425a5: {Name:mk07a817c3baa85163ff6426ebab5f484d3c51b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:53:37.734699  426617 certs.go:382] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/apiserver.crt.e64425a5 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/apiserver.crt
	I1123 09:53:37.734779  426617 certs.go:386] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/apiserver.key.e64425a5 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/apiserver.key
	I1123 09:53:37.734844  426617 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/proxy-client.key
	I1123 09:53:37.734870  426617 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/proxy-client.crt with IP's: []
	I1123 09:53:38.023683  426617 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/proxy-client.crt ...
	I1123 09:53:38.023717  426617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/proxy-client.crt: {Name:mke4db8b4152d9d466a266d462150b43b8109521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:53:38.023932  426617 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/proxy-client.key ...
	I1123 09:53:38.023950  426617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/proxy-client.key: {Name:mkcf92e7bc30568b2de37a8e27353539ee8084fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:53:38.024050  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1123 09:53:38.024077  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1123 09:53:38.024095  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1123 09:53:38.024113  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1123 09:53:38.024131  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1123 09:53:38.024158  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1123 09:53:38.024172  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1123 09:53:38.024189  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1123 09:53:38.024259  426617 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 09:53:38.024304  426617 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 09:53:38.024316  426617 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:53:38.024344  426617 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:53:38.024376  426617 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:53:38.024410  426617 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 09:53:38.024467  426617 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:53:38.024506  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem -> /usr/share/ca-certificates/284904.pem
	I1123 09:53:38.024523  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> /usr/share/ca-certificates/2849042.pem
	I1123 09:53:38.024540  426617 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:53:38.025101  426617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:53:38.056762  426617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:53:38.078587  426617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:53:38.104875  426617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:53:38.131112  426617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1123 09:53:38.153489  426617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:53:38.174726  426617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:53:38.197157  426617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:53:38.222524  426617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 09:53:38.244751  426617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 09:53:38.263676  426617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:53:38.286224  426617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:53:38.299220  426617 ssh_runner.go:195] Run: openssl version
	I1123 09:53:38.305230  426617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 09:53:38.313882  426617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 09:53:38.317570  426617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 09:53:38.317636  426617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 09:53:38.370999  426617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 09:53:38.385799  426617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 09:53:38.397882  426617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 09:53:38.402520  426617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 09:53:38.402595  426617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 09:53:38.466821  426617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:53:38.475633  426617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:53:38.484184  426617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:53:38.487693  426617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:53:38.487762  426617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:53:38.532890  426617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:53:38.550665  426617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:53:38.558615  426617 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:53:38.558712  426617 kubeadm.go:401] StartCluster: {Name:force-systemd-env-653569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-653569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:53:38.558797  426617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:53:38.558856  426617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:53:38.615075  426617 cri.go:89] found id: ""
	I1123 09:53:38.615150  426617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:53:38.628947  426617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:53:38.637218  426617 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 09:53:38.637298  426617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:53:38.646151  426617 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 09:53:38.646174  426617 kubeadm.go:158] found existing configuration files:
	
	I1123 09:53:38.646227  426617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 09:53:38.656431  426617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 09:53:38.656507  426617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 09:53:38.664549  426617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 09:53:38.673399  426617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 09:53:38.673494  426617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:53:38.685482  426617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 09:53:38.694729  426617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 09:53:38.694802  426617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:53:38.704122  426617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 09:53:38.713028  426617 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 09:53:38.713157  426617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 09:53:38.723375  426617 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 09:53:38.770878  426617 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 09:53:38.770988  426617 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 09:53:38.799667  426617 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 09:53:38.799739  426617 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 09:53:38.799775  426617 kubeadm.go:319] OS: Linux
	I1123 09:53:38.799821  426617 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 09:53:38.799869  426617 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 09:53:38.799925  426617 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 09:53:38.799974  426617 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 09:53:38.800022  426617 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 09:53:38.800071  426617 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 09:53:38.800117  426617 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 09:53:38.800172  426617 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 09:53:38.800218  426617 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 09:53:38.896865  426617 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 09:53:38.897212  426617 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 09:53:38.897325  426617 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 09:53:38.906985  426617 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 09:53:38.190292  427587 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:53:38.190313  427587 machine.go:97] duration metric: took 6.335538926s to provisionDockerMachine
	I1123 09:53:38.190325  427587 start.go:293] postStartSetup for "pause-902289" (driver="docker")
	I1123 09:53:38.190337  427587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:53:38.190403  427587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:53:38.190450  427587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-902289
	I1123 09:53:38.215977  427587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33349 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/pause-902289/id_rsa Username:docker}
	I1123 09:53:38.325771  427587 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:53:38.329965  427587 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:53:38.329992  427587 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:53:38.330003  427587 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 09:53:38.330059  427587 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 09:53:38.330137  427587 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 09:53:38.330236  427587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:53:38.338314  427587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:53:38.366603  427587 start.go:296] duration metric: took 176.251647ms for postStartSetup
	I1123 09:53:38.366737  427587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:53:38.366811  427587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-902289
	I1123 09:53:38.393760  427587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33349 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/pause-902289/id_rsa Username:docker}
	I1123 09:53:38.503904  427587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:53:38.510282  427587 fix.go:56] duration metric: took 6.678112222s for fixHost
	I1123 09:53:38.510310  427587 start.go:83] releasing machines lock for "pause-902289", held for 6.67817889s
	I1123 09:53:38.510404  427587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-902289
	I1123 09:53:38.532176  427587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:53:38.532266  427587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-902289
	I1123 09:53:38.532524  427587 ssh_runner.go:195] Run: cat /version.json
	I1123 09:53:38.532567  427587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-902289
	I1123 09:53:38.558224  427587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33349 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/pause-902289/id_rsa Username:docker}
	I1123 09:53:38.569620  427587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33349 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/pause-902289/id_rsa Username:docker}
	I1123 09:53:38.681508  427587 ssh_runner.go:195] Run: systemctl --version
	I1123 09:53:38.781753  427587 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:53:38.848783  427587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:53:38.854119  427587 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:53:38.854194  427587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:53:38.863765  427587 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:53:38.863791  427587 start.go:496] detecting cgroup driver to use...
	I1123 09:53:38.863823  427587 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 09:53:38.863873  427587 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:53:38.880966  427587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:53:38.897019  427587 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:53:38.897093  427587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:53:38.917624  427587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:53:38.931448  427587 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:53:39.112051  427587 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:53:39.296009  427587 docker.go:234] disabling docker service ...
	I1123 09:53:39.296086  427587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:53:39.315306  427587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:53:39.331942  427587 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:53:39.511743  427587 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:53:39.691401  427587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:53:39.707179  427587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:53:39.723224  427587 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:53:39.723346  427587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:53:39.733252  427587 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 09:53:39.733397  427587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:53:39.744538  427587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:53:39.754480  427587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:53:39.764532  427587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:53:39.774498  427587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:53:39.783893  427587 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:53:39.792953  427587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:53:39.802405  427587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:53:39.811301  427587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
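The sed and sysctl commands above retarget cri-o's drop-in config before the crio restart that follows; a minimal verification sketch (assuming a shell on the node, e.g. via `minikube -p pause-902289 ssh`) is:

	# confirm the values written by the sed edits above landed in the drop-in
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# or ask cri-o for its merged, effective configuration
	sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup'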
	I1123 09:53:39.819686  427587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:53:40.023621  427587 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:53:40.537472  427587 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:53:40.537631  427587 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:53:40.541896  427587 start.go:564] Will wait 60s for crictl version
	I1123 09:53:40.541976  427587 ssh_runner.go:195] Run: which crictl
	I1123 09:53:40.545733  427587 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:53:40.589298  427587 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:53:40.589384  427587 ssh_runner.go:195] Run: crio --version
	I1123 09:53:40.625287  427587 ssh_runner.go:195] Run: crio --version
	I1123 09:53:40.677492  427587 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:53:40.680596  427587 cli_runner.go:164] Run: docker network inspect pause-902289 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:53:40.709719  427587 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 09:53:40.714495  427587 kubeadm.go:884] updating cluster {Name:pause-902289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-902289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:53:40.714655  427587 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:53:40.714716  427587 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:53:40.843057  427587 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:53:40.843082  427587 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:53:40.843134  427587 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:53:40.972747  427587 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:53:40.972771  427587 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:53:40.972780  427587 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 09:53:40.972878  427587 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-902289 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-902289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
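The unit fragment above is rendered into a systemd drop-in a few steps later (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below); a hedged sketch for inspecting the effective unit on the node:

	# show the base kubelet unit plus the 10-kubeadm.conf drop-in override
	systemctl cat kubelet
	# the drop-in written by this run, including the ExecStart flags logged above
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf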
	I1123 09:53:40.972957  427587 ssh_runner.go:195] Run: crio config
	I1123 09:53:41.145972  427587 cni.go:84] Creating CNI manager for ""
	I1123 09:53:41.145998  427587 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:53:41.146013  427587 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:53:41.146036  427587 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-902289 NodeName:pause-902289 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:53:41.146167  427587 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-902289"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
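The generated manifest above is copied to /var/tmp/minikube/kubeadm.yaml.new in the scp step below; since this profile takes the cluster-restart path rather than a fresh init, the following is only a hedged sketch of how that file could be exercised with kubeadm's dry-run mode, not something this run performs:

	# hypothetical check, not part of this test run
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run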
	I1123 09:53:41.146238  427587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:53:41.172056  427587 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:53:41.172125  427587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:53:41.253932  427587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1123 09:53:41.309097  427587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:53:41.337148  427587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1123 09:53:41.361997  427587 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:53:41.370078  427587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:53:41.736632  427587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:53:41.787071  427587 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289 for IP: 192.168.76.2
	I1123 09:53:41.787090  427587 certs.go:195] generating shared ca certs ...
	I1123 09:53:41.787105  427587 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:53:41.787239  427587 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 09:53:41.787277  427587 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 09:53:41.787295  427587 certs.go:257] generating profile certs ...
	I1123 09:53:41.787382  427587 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/client.key
	I1123 09:53:41.787456  427587 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/apiserver.key.a9bff26b
	I1123 09:53:41.787498  427587 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/proxy-client.key
	I1123 09:53:41.787606  427587 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 09:53:41.787640  427587 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 09:53:41.787647  427587 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:53:41.787677  427587 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:53:41.787718  427587 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:53:41.787748  427587 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 09:53:41.787793  427587 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 09:53:41.788401  427587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:53:41.862179  427587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:53:41.993965  427587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:53:42.055310  427587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 09:53:42.090857  427587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 09:53:42.133614  427587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:53:42.175402  427587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:53:42.243401  427587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:53:42.337901  427587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 09:53:42.384141  427587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:53:42.432559  427587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 09:53:42.524807  427587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:53:42.576716  427587 ssh_runner.go:195] Run: openssl version
	I1123 09:53:42.607683  427587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 09:53:42.643301  427587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 09:53:42.650015  427587 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 09:53:42.650094  427587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 09:53:42.746624  427587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 09:53:42.761549  427587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 09:53:42.808272  427587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 09:53:42.812668  427587 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 09:53:42.812739  427587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 09:53:42.897769  427587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:53:42.910940  427587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:53:42.925116  427587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:53:42.938080  427587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:53:42.938157  427587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:53:43.037658  427587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
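The openssl/ln pairs above build the standard OpenSSL hashed-directory layout under /etc/ssl/certs, where each CA is reachable through a symlink named after its subject hash; a minimal sketch of the pattern (cert path taken from the log, the .0 suffix assumes no hash collision):

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")    # prints b5213941 for this CA
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"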
	I1123 09:53:43.053119  427587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:53:43.059307  427587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:53:43.150355  427587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:53:43.207873  427587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:53:43.280570  427587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:53:43.366447  427587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:53:43.443568  427587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
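Each `-checkend 86400` call above exits non-zero if the certificate expires within 86400 seconds (24 hours); a hedged sketch of the same check as a loop over the profile's control-plane certs:

	for c in apiserver-etcd-client.crt apiserver-kubelet-client.crt front-proxy-client.crt \
	         etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt; do
	  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c" \
	    || echo "expires within 24h: $c"
	done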
	I1123 09:53:43.540460  427587 kubeadm.go:401] StartCluster: {Name:pause-902289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-902289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:53:43.540584  427587 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:53:43.540658  427587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:53:43.612682  427587 cri.go:89] found id: "656a67f7696253ff07fee5935f113bf2aab9c31a82f76613d0a52bb745cf02e3"
	I1123 09:53:43.612706  427587 cri.go:89] found id: "e2c772377726bd8b3beea2efc62f66e9cfb85a568feb56ef5d6c49a797734800"
	I1123 09:53:43.612711  427587 cri.go:89] found id: "28a462431337106b621a0e2e0ebeda0f9205283900c74c993b5b8f1e0ab5751b"
	I1123 09:53:43.612714  427587 cri.go:89] found id: "891c5da2b2cf8e87bd8f24a275f47debf482222a675ab0960fad8dd9ee882ab2"
	I1123 09:53:43.612717  427587 cri.go:89] found id: "301fb617b1f960338a688814750032b12882ded15eac0506bfd49ddf0934870b"
	I1123 09:53:43.612721  427587 cri.go:89] found id: "924727f67067568042a015bc3e901ad4ad44c23a740962980e1d770157ccd349"
	I1123 09:53:43.612724  427587 cri.go:89] found id: "e6a3434aae7399365305f02fe70d5f6ea51d903da9bc3be6ddc186ca7434c593"
	I1123 09:53:43.612727  427587 cri.go:89] found id: "4bb5df37d1031824f5c4150f63585d202677be311760ed8886913f82f675b2d2"
	I1123 09:53:43.612730  427587 cri.go:89] found id: "a4f1980bab92b13afddf2474d8b4b5b8b53f0cd0a64c295106b26ae3db1103af"
	I1123 09:53:43.612738  427587 cri.go:89] found id: "21b7e1366f30e127ba37cbd9bc0a22fc7073ee77f7eb6a86efe280ea69f595b0"
	I1123 09:53:43.612741  427587 cri.go:89] found id: "b6aece6a157139e982fe1e6ec7e327f9f62fa96ac78ffcbccff18c993426e2a5"
	I1123 09:53:43.612746  427587 cri.go:89] found id: "00f2d513b8a4eeb15f37c571edaf256e1cc41499c119fb407d7ad17fb1c4e582"
	I1123 09:53:43.612753  427587 cri.go:89] found id: "9f818fc66635cc44eeb47e4207008d4a814ac58d7495df544d7c6550de4cfd40"
	I1123 09:53:43.612756  427587 cri.go:89] found id: "f9cfd21effcadf8269de4c91c08df2b43305336549c8f0bd07926f49473ef1dd"
	I1123 09:53:43.612759  427587 cri.go:89] found id: ""
	I1123 09:53:43.612807  427587 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:53:43.646706  427587 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:53:43Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:53:43.646777  427587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:53:43.666423  427587 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:53:43.666444  427587 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:53:43.666495  427587 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:53:43.685003  427587 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:53:43.685536  427587 kubeconfig.go:125] found "pause-902289" server: "https://192.168.76.2:8443"
	I1123 09:53:43.686069  427587 kapi.go:59] client config for pause-902289: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:53:43.686566  427587 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1123 09:53:43.686587  427587 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1123 09:53:43.686593  427587 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1123 09:53:43.686598  427587 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1123 09:53:43.686602  427587 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1123 09:53:43.686853  427587 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:53:43.706299  427587 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 09:53:43.706334  427587 kubeadm.go:602] duration metric: took 39.883829ms to restartPrimaryControlPlane
	I1123 09:53:43.706343  427587 kubeadm.go:403] duration metric: took 165.8927ms to StartCluster
	I1123 09:53:43.706358  427587 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:53:43.706433  427587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:53:43.707102  427587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:53:43.707328  427587 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:53:43.707662  427587 config.go:182] Loaded profile config "pause-902289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:53:43.707710  427587 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:53:43.714407  427587 out.go:179] * Enabled addons: 
	I1123 09:53:43.714470  427587 out.go:179] * Verifying Kubernetes components...
	I1123 09:53:38.910215  426617 out.go:252]   - Generating certificates and keys ...
	I1123 09:53:38.910381  426617 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 09:53:38.910499  426617 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 09:53:39.278702  426617 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 09:53:40.089236  426617 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 09:53:40.515419  426617 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 09:53:40.671356  426617 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 09:53:42.405291  426617 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 09:53:42.405470  426617 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-653569 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 09:53:42.899741  426617 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 09:53:42.900280  426617 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-653569 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 09:53:43.409760  426617 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 09:53:43.717142  427587 addons.go:530] duration metric: took 9.432547ms for enable addons: enabled=[]
	I1123 09:53:43.717224  427587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:53:44.079142  427587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:53:44.093944  427587 node_ready.go:35] waiting up to 6m0s for node "pause-902289" to be "Ready" ...
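The node_ready poller above is roughly equivalent to the following hedged kubectl sketch (context name assumed to match the minikube profile):

	kubectl --context pause-902289 wait --for=condition=Ready node/pause-902289 --timeout=6m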
	I1123 09:53:43.953828  426617 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 09:53:44.062380  426617 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 09:53:44.062464  426617 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 09:53:44.847393  426617 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 09:53:46.168600  426617 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 09:53:46.759412  426617 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 09:53:46.982663  426617 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 09:53:48.037785  426617 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 09:53:48.037885  426617 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 09:53:48.038515  426617 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 09:53:48.041997  426617 out.go:252]   - Booting up control plane ...
	I1123 09:53:48.042100  426617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 09:53:48.045794  426617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 09:53:48.049790  426617 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 09:53:48.089847  426617 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 09:53:48.089956  426617 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 09:53:48.101881  426617 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 09:53:48.101981  426617 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 09:53:48.102020  426617 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 09:53:48.314787  426617 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 09:53:48.314909  426617 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 09:53:48.886086  427587 node_ready.go:49] node "pause-902289" is "Ready"
	I1123 09:53:48.886123  427587 node_ready.go:38] duration metric: took 4.792146389s for node "pause-902289" to be "Ready" ...
	I1123 09:53:48.886138  427587 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:53:48.886199  427587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:53:48.931065  427587 api_server.go:72] duration metric: took 5.223703636s to wait for apiserver process to appear ...
	I1123 09:53:48.931149  427587 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:53:48.931184  427587 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:53:48.996148  427587 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:53:48.996191  427587 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
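The [+]/[-] lines are the apiserver's verbose healthz output, one entry per registered check; a hedged sketch for querying the same endpoint directly (the request may need a bearer token depending on the cluster's anonymous-auth settings):

	# per-check detail, same format as the log above
	curl -k "https://192.168.76.2:8443/healthz?verbose"
	# a known-failing check can be excluded while debugging
	curl -k "https://192.168.76.2:8443/healthz?verbose&exclude=poststarthook/rbac/bootstrap-roles"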
	I1123 09:53:49.431706  427587 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:53:49.449295  427587 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:53:49.449385  427587 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:53:49.932003  427587 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:53:49.977350  427587 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:53:49.977442  427587 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:53:50.431690  427587 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:53:50.446112  427587 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 09:53:50.450401  427587 api_server.go:141] control plane version: v1.34.1
	I1123 09:53:50.450475  427587 api_server.go:131] duration metric: took 1.519304807s to wait for apiserver health ...
	I1123 09:53:50.450511  427587 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:53:50.463245  427587 system_pods.go:59] 7 kube-system pods found
	I1123 09:53:50.463291  427587 system_pods.go:61] "coredns-66bc5c9577-94mmp" [60daea3c-96b0-4122-adef-f228835ee2df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:53:50.463302  427587 system_pods.go:61] "etcd-pause-902289" [2eabc0ea-91a0-452b-aa60-4614414b3481] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:53:50.463315  427587 system_pods.go:61] "kindnet-xmfwf" [dd9e7594-d56b-4dc1-bf62-ab12f3d30214] Running
	I1123 09:53:50.463323  427587 system_pods.go:61] "kube-apiserver-pause-902289" [0b08278e-1b51-4418-a524-07289c1de1f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:53:50.463332  427587 system_pods.go:61] "kube-controller-manager-pause-902289" [110d75f3-745d-474b-a151-73c92537ad82] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:53:50.463341  427587 system_pods.go:61] "kube-proxy-55824" [bebd08c2-99f4-4417-a511-ab1014ed8137] Running
	I1123 09:53:50.463347  427587 system_pods.go:61] "kube-scheduler-pause-902289" [5bb32703-06dc-49e5-af9f-3efb04eab750] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:53:50.463352  427587 system_pods.go:74] duration metric: took 12.818238ms to wait for pod list to return data ...
	I1123 09:53:50.463360  427587 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:53:50.471702  427587 default_sa.go:45] found service account: "default"
	I1123 09:53:50.471792  427587 default_sa.go:55] duration metric: took 8.424951ms for default service account to be created ...
	I1123 09:53:50.471842  427587 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:53:50.482774  427587 system_pods.go:86] 7 kube-system pods found
	I1123 09:53:50.482859  427587 system_pods.go:89] "coredns-66bc5c9577-94mmp" [60daea3c-96b0-4122-adef-f228835ee2df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:53:50.482884  427587 system_pods.go:89] "etcd-pause-902289" [2eabc0ea-91a0-452b-aa60-4614414b3481] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:53:50.482923  427587 system_pods.go:89] "kindnet-xmfwf" [dd9e7594-d56b-4dc1-bf62-ab12f3d30214] Running
	I1123 09:53:50.482950  427587 system_pods.go:89] "kube-apiserver-pause-902289" [0b08278e-1b51-4418-a524-07289c1de1f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:53:50.482972  427587 system_pods.go:89] "kube-controller-manager-pause-902289" [110d75f3-745d-474b-a151-73c92537ad82] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:53:50.483011  427587 system_pods.go:89] "kube-proxy-55824" [bebd08c2-99f4-4417-a511-ab1014ed8137] Running
	I1123 09:53:50.483037  427587 system_pods.go:89] "kube-scheduler-pause-902289" [5bb32703-06dc-49e5-af9f-3efb04eab750] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:53:50.483065  427587 system_pods.go:126] duration metric: took 11.204076ms to wait for k8s-apps to be running ...
	I1123 09:53:50.483099  427587 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:53:50.483198  427587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:53:50.505517  427587 system_svc.go:56] duration metric: took 22.388329ms WaitForService to wait for kubelet
	I1123 09:53:50.505599  427587 kubeadm.go:587] duration metric: took 6.798242803s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:53:50.505635  427587 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:53:50.509394  427587 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:53:50.509497  427587 node_conditions.go:123] node cpu capacity is 2
	I1123 09:53:50.509534  427587 node_conditions.go:105] duration metric: took 3.877267ms to run NodePressure ...
	I1123 09:53:50.509573  427587 start.go:242] waiting for startup goroutines ...
	I1123 09:53:50.509597  427587 start.go:247] waiting for cluster config update ...
	I1123 09:53:50.509618  427587 start.go:256] writing updated cluster config ...
	I1123 09:53:50.510027  427587 ssh_runner.go:195] Run: rm -f paused
	I1123 09:53:50.514148  427587 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:53:50.514843  427587 kapi.go:59] client config for pause-902289: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:53:50.561395  427587 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-94mmp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:53:50.319865  426617 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.005345555s
	I1123 09:53:50.323230  426617 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 09:53:50.323326  426617 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1123 09:53:50.323631  426617 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 09:53:50.323717  426617 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1123 09:53:52.571647  427587 pod_ready.go:104] pod "coredns-66bc5c9577-94mmp" is not "Ready", error: <nil>
	I1123 09:53:53.070814  427587 pod_ready.go:94] pod "coredns-66bc5c9577-94mmp" is "Ready"
	I1123 09:53:53.070893  427587 pod_ready.go:86] duration metric: took 2.509381985s for pod "coredns-66bc5c9577-94mmp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:53:53.078340  427587 pod_ready.go:83] waiting for pod "etcd-pause-902289" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:53:55.083896  427587 pod_ready.go:104] pod "etcd-pause-902289" is not "Ready", error: <nil>
	I1123 09:53:54.851429  426617 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.527683964s
	I1123 09:53:55.802210  426617 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.47893351s
	I1123 09:53:57.827300  426617 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.503764862s
	I1123 09:53:57.851840  426617 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 09:53:57.866563  426617 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 09:53:57.884025  426617 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 09:53:57.884249  426617 kubeadm.go:319] [mark-control-plane] Marking the node force-systemd-env-653569 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 09:53:57.905304  426617 kubeadm.go:319] [bootstrap-token] Using token: sm7e3f.2j7219sghtansujh
	I1123 09:53:57.908410  426617 out.go:252]   - Configuring RBAC rules ...
	I1123 09:53:57.908546  426617 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 09:53:57.915889  426617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 09:53:57.923832  426617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 09:53:57.929015  426617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 09:53:57.933886  426617 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 09:53:57.938195  426617 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 09:53:58.235563  426617 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 09:53:58.715109  426617 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	W1123 09:53:57.084046  427587 pod_ready.go:104] pod "etcd-pause-902289" is not "Ready", error: <nil>
	I1123 09:53:58.084227  427587 pod_ready.go:94] pod "etcd-pause-902289" is "Ready"
	I1123 09:53:58.084253  427587 pod_ready.go:86] duration metric: took 5.005886909s for pod "etcd-pause-902289" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:53:58.086954  427587 pod_ready.go:83] waiting for pod "kube-apiserver-pause-902289" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:53:58.091947  427587 pod_ready.go:94] pod "kube-apiserver-pause-902289" is "Ready"
	I1123 09:53:58.091976  427587 pod_ready.go:86] duration metric: took 4.993155ms for pod "kube-apiserver-pause-902289" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:53:58.094399  427587 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-902289" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:53:58.099540  427587 pod_ready.go:94] pod "kube-controller-manager-pause-902289" is "Ready"
	I1123 09:53:58.099570  427587 pod_ready.go:86] duration metric: took 5.143385ms for pod "kube-controller-manager-pause-902289" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:53:58.102078  427587 pod_ready.go:83] waiting for pod "kube-proxy-55824" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:53:58.282636  427587 pod_ready.go:94] pod "kube-proxy-55824" is "Ready"
	I1123 09:53:58.282660  427587 pod_ready.go:86] duration metric: took 180.553535ms for pod "kube-proxy-55824" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:53:58.483274  427587 pod_ready.go:83] waiting for pod "kube-scheduler-pause-902289" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:53:58.882070  427587 pod_ready.go:94] pod "kube-scheduler-pause-902289" is "Ready"
	I1123 09:53:58.882102  427587 pod_ready.go:86] duration metric: took 398.803106ms for pod "kube-scheduler-pause-902289" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:53:58.882115  427587 pod_ready.go:40] duration metric: took 8.367888203s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:53:58.948591  427587 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:53:58.953683  427587 out.go:179] * Done! kubectl is now configured to use "pause-902289" cluster and "default" namespace by default
	I1123 09:53:59.235509  426617 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 09:53:59.237302  426617 kubeadm.go:319] 
	I1123 09:53:59.237387  426617 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 09:53:59.237398  426617 kubeadm.go:319] 
	I1123 09:53:59.237505  426617 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 09:53:59.237515  426617 kubeadm.go:319] 
	I1123 09:53:59.237541  426617 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 09:53:59.237626  426617 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 09:53:59.237686  426617 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 09:53:59.237691  426617 kubeadm.go:319] 
	I1123 09:53:59.237745  426617 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 09:53:59.237749  426617 kubeadm.go:319] 
	I1123 09:53:59.237796  426617 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 09:53:59.237800  426617 kubeadm.go:319] 
	I1123 09:53:59.237852  426617 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 09:53:59.237928  426617 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 09:53:59.237996  426617 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 09:53:59.238000  426617 kubeadm.go:319] 
	I1123 09:53:59.238084  426617 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 09:53:59.238161  426617 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 09:53:59.238165  426617 kubeadm.go:319] 
	I1123 09:53:59.238249  426617 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token sm7e3f.2j7219sghtansujh \
	I1123 09:53:59.238353  426617 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:887f8119ffe4d5a917d34cb24e0eb6ba3996e6bcce8cd575315ae96526a54a7e \
	I1123 09:53:59.238381  426617 kubeadm.go:319] 	--control-plane 
	I1123 09:53:59.238385  426617 kubeadm.go:319] 
	I1123 09:53:59.238470  426617 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 09:53:59.238473  426617 kubeadm.go:319] 
	I1123 09:53:59.238555  426617 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token sm7e3f.2j7219sghtansujh \
	I1123 09:53:59.238657  426617 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:887f8119ffe4d5a917d34cb24e0eb6ba3996e6bcce8cd575315ae96526a54a7e 
	I1123 09:53:59.241966  426617 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 09:53:59.242219  426617 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 09:53:59.242324  426617 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 09:53:59.242341  426617 cni.go:84] Creating CNI manager for ""
	I1123 09:53:59.242349  426617 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:53:59.245462  426617 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 09:53:59.248462  426617 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 09:53:59.252774  426617 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 09:53:59.252793  426617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 09:53:59.267786  426617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 09:53:59.799110  426617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:53:59.799257  426617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:53:59.799348  426617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes force-systemd-env-653569 minikube.k8s.io/updated_at=2025_11_23T09_53_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=force-systemd-env-653569 minikube.k8s.io/primary=true
	I1123 09:53:59.939466  426617 ops.go:34] apiserver oom_adj: -16
	I1123 09:53:59.939497  426617 kubeadm.go:1114] duration metric: took 140.306361ms to wait for elevateKubeSystemPrivileges
	I1123 09:53:59.939519  426617 kubeadm.go:403] duration metric: took 21.380810599s to StartCluster
	I1123 09:53:59.939535  426617 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:53:59.939599  426617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:53:59.940599  426617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:53:59.940813  426617 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:53:59.940976  426617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 09:53:59.941228  426617 config.go:182] Loaded profile config "force-systemd-env-653569": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:53:59.941260  426617 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:53:59.941315  426617 addons.go:70] Setting storage-provisioner=true in profile "force-systemd-env-653569"
	I1123 09:53:59.941329  426617 addons.go:239] Setting addon storage-provisioner=true in "force-systemd-env-653569"
	I1123 09:53:59.941361  426617 host.go:66] Checking if "force-systemd-env-653569" exists ...
	I1123 09:53:59.941888  426617 addons.go:70] Setting default-storageclass=true in profile "force-systemd-env-653569"
	I1123 09:53:59.941908  426617 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-env-653569"
	I1123 09:53:59.942167  426617 cli_runner.go:164] Run: docker container inspect force-systemd-env-653569 --format={{.State.Status}}
	I1123 09:53:59.942420  426617 cli_runner.go:164] Run: docker container inspect force-systemd-env-653569 --format={{.State.Status}}
	I1123 09:53:59.947496  426617 out.go:179] * Verifying Kubernetes components...
	I1123 09:53:59.950694  426617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:53:59.981355  426617 kapi.go:59] client config for force-systemd-env-653569: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:53:59.983069  426617 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:53:59.983710  426617 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1123 09:53:59.983726  426617 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1123 09:53:59.983732  426617 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1123 09:53:59.983736  426617 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1123 09:53:59.983741  426617 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1123 09:53:59.983981  426617 addons.go:239] Setting addon default-storageclass=true in "force-systemd-env-653569"
	I1123 09:53:59.984034  426617 host.go:66] Checking if "force-systemd-env-653569" exists ...
	I1123 09:53:59.984573  426617 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1123 09:53:59.984725  426617 cli_runner.go:164] Run: docker container inspect force-systemd-env-653569 --format={{.State.Status}}
	I1123 09:53:59.986572  426617 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:53:59.986591  426617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:53:59.986645  426617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-653569
	I1123 09:54:00.011027  426617 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:54:00.011057  426617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:54:00.011127  426617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-653569
	I1123 09:54:00.045708  426617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/force-systemd-env-653569/id_rsa Username:docker}
	I1123 09:54:00.060964  426617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/force-systemd-env-653569/id_rsa Username:docker}
	I1123 09:54:00.421398  426617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:54:00.461299  426617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:54:00.465338  426617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:54:00.465496  426617 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 09:54:01.092080  426617 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 09:54:01.092248  426617 kapi.go:59] client config for force-systemd-env-653569: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:54:01.092526  426617 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:54:01.092576  426617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:54:01.092718  426617 kapi.go:59] client config for force-systemd-env-653569: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/profiles/force-systemd-env-653569/client.key", CAFile:"/home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:54:01.097252  426617 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 09:54:01.100328  426617 addons.go:530] duration metric: took 1.159056376s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 09:54:01.109522  426617 api_server.go:72] duration metric: took 1.168562582s to wait for apiserver process to appear ...
	I1123 09:54:01.109551  426617 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:54:01.109575  426617 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 09:54:01.127107  426617 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 09:54:01.128363  426617 api_server.go:141] control plane version: v1.34.1
	I1123 09:54:01.128390  426617 api_server.go:131] duration metric: took 18.831009ms to wait for apiserver health ...
	I1123 09:54:01.128399  426617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:54:01.131871  426617 system_pods.go:59] 5 kube-system pods found
	I1123 09:54:01.131912  426617 system_pods.go:61] "etcd-force-systemd-env-653569" [0cb3bbe3-a201-4d44-a6e9-2ab5692aeccb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:54:01.131937  426617 system_pods.go:61] "kube-apiserver-force-systemd-env-653569" [4fdad6f5-6659-4780-95a3-3e55aaceba65] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:54:01.131972  426617 system_pods.go:61] "kube-controller-manager-force-systemd-env-653569" [f02b1e43-b9e6-4200-b73d-af73a7e30aad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:54:01.131991  426617 system_pods.go:61] "kube-scheduler-force-systemd-env-653569" [5e1371d7-f805-4832-9eca-a3c8b427268a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:54:01.131996  426617 system_pods.go:61] "storage-provisioner" [b7231173-9b6d-4885-9b33-ed3fecd300de] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 09:54:01.132003  426617 system_pods.go:74] duration metric: took 3.598387ms to wait for pod list to return data ...
	I1123 09:54:01.132019  426617 kubeadm.go:587] duration metric: took 1.19118307s to wait for: map[apiserver:true system_pods:true]
	I1123 09:54:01.132049  426617 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:54:01.138900  426617 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:54:01.138936  426617 node_conditions.go:123] node cpu capacity is 2
	I1123 09:54:01.138950  426617 node_conditions.go:105] duration metric: took 6.885507ms to run NodePressure ...
	I1123 09:54:01.138963  426617 start.go:242] waiting for startup goroutines ...
	I1123 09:54:01.596134  426617 kapi.go:214] "coredns" deployment in "kube-system" namespace and "force-systemd-env-653569" context rescaled to 1 replicas
	I1123 09:54:01.596166  426617 start.go:247] waiting for cluster config update ...
	I1123 09:54:01.596189  426617 start.go:256] writing updated cluster config ...
	I1123 09:54:01.596486  426617 ssh_runner.go:195] Run: rm -f paused
	I1123 09:54:01.679420  426617 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:54:01.682760  426617 out.go:179] * Done! kubectl is now configured to use "force-systemd-env-653569" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.189867837Z" level=info msg="Started container" PID=2218 containerID=891c5da2b2cf8e87bd8f24a275f47debf482222a675ab0960fad8dd9ee882ab2 description=kube-system/kindnet-xmfwf/kindnet-cni id=bc12d0d4-b412-4c2c-8943-afcd754c8414 name=/runtime.v1.RuntimeService/StartContainer sandboxID=63bbf5223d7f9fe1d92710d109a7b312186432d7f7eea15503372bd88420dd35
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.227167342Z" level=info msg="Created container 28a462431337106b621a0e2e0ebeda0f9205283900c74c993b5b8f1e0ab5751b: kube-system/kube-scheduler-pause-902289/kube-scheduler" id=6e65dc95-6b24-49f6-be22-276cd07b5cba name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.230213548Z" level=info msg="Created container e2c772377726bd8b3beea2efc62f66e9cfb85a568feb56ef5d6c49a797734800: kube-system/coredns-66bc5c9577-94mmp/coredns" id=5b9eaa34-dab3-4e51-b000-151797a23f31 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.233980635Z" level=info msg="Starting container: 28a462431337106b621a0e2e0ebeda0f9205283900c74c993b5b8f1e0ab5751b" id=c47020f4-9eed-4c47-9b8e-9a50a82fc49c name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.234439743Z" level=info msg="Starting container: e2c772377726bd8b3beea2efc62f66e9cfb85a568feb56ef5d6c49a797734800" id=ccc3f007-40a5-4bc2-8ecd-4531a532ec80 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.2616218Z" level=info msg="Created container 656a67f7696253ff07fee5935f113bf2aab9c31a82f76613d0a52bb745cf02e3: kube-system/etcd-pause-902289/etcd" id=8b1ae589-b0dc-4f2f-bee6-fbffdcd569f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.269651693Z" level=info msg="Started container" PID=2221 containerID=28a462431337106b621a0e2e0ebeda0f9205283900c74c993b5b8f1e0ab5751b description=kube-system/kube-scheduler-pause-902289/kube-scheduler id=c47020f4-9eed-4c47-9b8e-9a50a82fc49c name=/runtime.v1.RuntimeService/StartContainer sandboxID=1326de3f641c3a9e7f4a48bc9918cfbb6edd81b06b83c4f9db0295745db9c2da
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.271115796Z" level=info msg="Started container" PID=2230 containerID=e2c772377726bd8b3beea2efc62f66e9cfb85a568feb56ef5d6c49a797734800 description=kube-system/coredns-66bc5c9577-94mmp/coredns id=ccc3f007-40a5-4bc2-8ecd-4531a532ec80 name=/runtime.v1.RuntimeService/StartContainer sandboxID=14b140584fb0895c99d84bac283d37f9a26e754827ad300a12e6c51c42021d2b
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.289893333Z" level=info msg="Starting container: 656a67f7696253ff07fee5935f113bf2aab9c31a82f76613d0a52bb745cf02e3" id=32c9cd29-8380-4591-af79-75797c2184f5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.295331221Z" level=info msg="Started container" PID=2251 containerID=656a67f7696253ff07fee5935f113bf2aab9c31a82f76613d0a52bb745cf02e3 description=kube-system/etcd-pause-902289/etcd id=32c9cd29-8380-4591-af79-75797c2184f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e2b1ac4a2bd4a73405c84f99e50a2572815358c8f0e1d4acf2f3365e6fd81dd
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.723543123Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.728156261Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.728332977Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.728470341Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.732754178Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.73278768Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.73280798Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.739558691Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.739708945Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.73979149Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.743285358Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.743326532Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.743377216Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.748283708Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.748320837Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	656a67f769625       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   21 seconds ago       Running             etcd                      1                   1e2b1ac4a2bd4       etcd-pause-902289                      kube-system
	e2c772377726b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago       Running             coredns                   1                   14b140584fb08       coredns-66bc5c9577-94mmp               kube-system
	28a4624313371       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   21 seconds ago       Running             kube-scheduler            1                   1326de3f641c3       kube-scheduler-pause-902289            kube-system
	891c5da2b2cf8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   63bbf5223d7f9       kindnet-xmfwf                          kube-system
	301fb617b1f96       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   21 seconds ago       Running             kube-controller-manager   1                   1d8bceee68d0e       kube-controller-manager-pause-902289   kube-system
	924727f670675       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   21 seconds ago       Running             kube-proxy                1                   5e175c7eceea5       kube-proxy-55824                       kube-system
	e6a3434aae739       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   21 seconds ago       Running             kube-apiserver            1                   c571866863292       kube-apiserver-pause-902289            kube-system
	4bb5df37d1031       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   33 seconds ago       Exited              coredns                   0                   14b140584fb08       coredns-66bc5c9577-94mmp               kube-system
	a4f1980bab92b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   63bbf5223d7f9       kindnet-xmfwf                          kube-system
	21b7e1366f30e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   5e175c7eceea5       kube-proxy-55824                       kube-system
	b6aece6a15713       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   1d8bceee68d0e       kube-controller-manager-pause-902289   kube-system
	00f2d513b8a4e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   1326de3f641c3       kube-scheduler-pause-902289            kube-system
	9f818fc66635c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   c571866863292       kube-apiserver-pause-902289            kube-system
	f9cfd21effcad       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   1e2b1ac4a2bd4       etcd-pause-902289                      kube-system
	
	
	==> coredns [4bb5df37d1031824f5c4150f63585d202677be311760ed8886913f82f675b2d2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60295 - 18510 "HINFO IN 1890666745503102433.2653721831748515458. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.037532903s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e2c772377726bd8b3beea2efc62f66e9cfb85a568feb56ef5d6c49a797734800] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49126 - 41370 "HINFO IN 2291029018498211150.3434779713424746718. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027290669s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               pause-902289
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-902289
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=pause-902289
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_52_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:52:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-902289
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:53:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:53:43 +0000   Sun, 23 Nov 2025 09:52:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:53:43 +0000   Sun, 23 Nov 2025 09:52:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:53:43 +0000   Sun, 23 Nov 2025 09:52:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:53:43 +0000   Sun, 23 Nov 2025 09:53:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-902289
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                fafaac0a-0e9a-4fa8-99b2-fb29633ef74d
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-94mmp                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     75s
	  kube-system                 etcd-pause-902289                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         84s
	  kube-system                 kindnet-xmfwf                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      75s
	  kube-system                 kube-apiserver-pause-902289             250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-pause-902289    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-55824                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-pause-902289             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 74s                kube-proxy       
	  Normal   Starting                 12s                kube-proxy       
	  Warning  CgroupV1                 92s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  92s (x8 over 92s)  kubelet          Node pause-902289 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    92s (x8 over 92s)  kubelet          Node pause-902289 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     92s (x8 over 92s)  kubelet          Node pause-902289 status is now: NodeHasSufficientPID
	  Normal   Starting                 80s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 80s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  80s                kubelet          Node pause-902289 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    80s                kubelet          Node pause-902289 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     80s                kubelet          Node pause-902289 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           76s                node-controller  Node pause-902289 event: Registered Node pause-902289 in Controller
	  Normal   NodeReady                34s                kubelet          Node pause-902289 status is now: NodeReady
	  Normal   RegisteredNode           10s                node-controller  Node pause-902289 event: Registered Node pause-902289 in Controller
	
	
	==> dmesg <==
	[Nov23 09:25] overlayfs: idmapped layers are currently not supported
	[Nov23 09:26] overlayfs: idmapped layers are currently not supported
	[Nov23 09:31] overlayfs: idmapped layers are currently not supported
	[  +4.906932] overlayfs: idmapped layers are currently not supported
	[Nov23 09:32] overlayfs: idmapped layers are currently not supported
	[ +39.649169] overlayfs: idmapped layers are currently not supported
	[Nov23 09:34] overlayfs: idmapped layers are currently not supported
	[Nov23 09:39] overlayfs: idmapped layers are currently not supported
	[ +33.513761] overlayfs: idmapped layers are currently not supported
	[Nov23 09:41] overlayfs: idmapped layers are currently not supported
	[Nov23 09:42] overlayfs: idmapped layers are currently not supported
	[Nov23 09:43] overlayfs: idmapped layers are currently not supported
	[Nov23 09:45] overlayfs: idmapped layers are currently not supported
	[ +17.384674] overlayfs: idmapped layers are currently not supported
	[ +16.809296] overlayfs: idmapped layers are currently not supported
	[Nov23 09:46] overlayfs: idmapped layers are currently not supported
	[ +17.278795] overlayfs: idmapped layers are currently not supported
	[Nov23 09:47] overlayfs: idmapped layers are currently not supported
	[ +12.563591] hrtimer: interrupt took 4093727 ns
	[ +14.190024] overlayfs: idmapped layers are currently not supported
	[Nov23 09:49] overlayfs: idmapped layers are currently not supported
	[Nov23 09:50] overlayfs: idmapped layers are currently not supported
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [656a67f7696253ff07fee5935f113bf2aab9c31a82f76613d0a52bb745cf02e3] <==
	{"level":"warn","ts":"2025-11-23T09:53:45.418740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.534057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.539817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.599889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.637641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.685529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.725397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.766302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.865584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.890825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.938272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.021921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.088428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.139329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.163583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.202255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.243918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.270545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.282747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.332827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.350286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.397811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.427524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.470472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.736666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48082","server-name":"","error":"EOF"}
	
	
	==> etcd [f9cfd21effcadf8269de4c91c08df2b43305336549c8f0bd07926f49473ef1dd] <==
	{"level":"warn","ts":"2025-11-23T09:52:34.671962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:52:34.739854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:52:34.823768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:52:34.860403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:52:34.896015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:52:34.964421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:52:35.254590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34632","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T09:53:32.941728Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-23T09:53:32.941781Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-902289","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-11-23T09:53:32.941882Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T09:53:33.078873Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T09:53:33.078952Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T09:53:33.078993Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-11-23T09:53:33.079079Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-23T09:53:33.079115Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-23T09:53:33.079112Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T09:53:33.079184Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T09:53:33.079224Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-23T09:53:33.079156Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T09:53:33.079289Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T09:53:33.079320Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T09:53:33.082323Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-11-23T09:53:33.082403Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T09:53:33.082442Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T09:53:33.082448Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-902289","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 09:54:03 up  2:36,  0 user,  load average: 3.59, 2.25, 1.85
	Linux pause-902289 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [891c5da2b2cf8e87bd8f24a275f47debf482222a675ab0960fad8dd9ee882ab2] <==
	I1123 09:53:41.350533       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:53:41.350740       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 09:53:41.350881       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:53:41.350893       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:53:41.350903       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:53:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:53:41.723402       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:53:41.723516       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:53:41.723557       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:53:41.739981       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 09:53:48.650771       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 09:53:48.652706       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 09:53:48.652845       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 09:53:48.652952       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1123 09:53:50.164742       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:53:50.164862       1 metrics.go:72] Registering metrics
	I1123 09:53:50.164963       1 controller.go:711] "Syncing nftables rules"
	I1123 09:53:51.723025       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:53:51.723158       1 main.go:301] handling current node
	I1123 09:54:01.723769       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:54:01.723801       1 main.go:301] handling current node
	
	
	==> kindnet [a4f1980bab92b13afddf2474d8b4b5b8b53f0cd0a64c295106b26ae3db1103af] <==
	I1123 09:52:48.260498       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:52:48.260892       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 09:52:48.261080       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:52:48.261832       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:52:48.261895       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:52:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:52:48.459720       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:52:48.459750       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:52:48.459759       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:52:48.459875       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 09:53:18.371436       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 09:53:18.459539       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 09:53:18.459758       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 09:53:18.459896       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 09:53:19.959892       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:53:19.960030       1 metrics.go:72] Registering metrics
	I1123 09:53:19.960140       1 controller.go:711] "Syncing nftables rules"
	I1123 09:53:28.377474       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:53:28.377528       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9f818fc66635cc44eeb47e4207008d4a814ac58d7495df544d7c6550de4cfd40] <==
	W1123 09:53:32.953528       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.953600       1 logging.go:55] [core] [Channel #25 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.953656       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.953719       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.953766       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.953828       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.953890       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.953938       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.953995       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954053       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954101       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954162       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954215       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954273       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954316       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954379       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954450       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954497       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954549       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954601       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954862       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954908       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.955058       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.955106       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.955174       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e6a3434aae7399365305f02fe70d5f6ea51d903da9bc3be6ddc186ca7434c593] <==
	I1123 09:53:48.838553       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:53:48.838582       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:53:48.857686       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 09:53:48.858041       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 09:53:48.858126       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 09:53:48.869246       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:53:48.913863       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:53:48.916593       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 09:53:48.946035       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 09:53:48.957534       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 09:53:48.961540       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 09:53:48.957750       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 09:53:48.961766       1 policy_source.go:240] refreshing policies
	I1123 09:53:48.957772       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 09:53:48.969594       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:53:48.957781       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 09:53:48.958342       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1123 09:53:49.000749       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 09:53:49.006730       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:53:49.364599       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:53:50.992940       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:53:52.335791       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:53:52.472110       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:53:52.516086       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:53:52.619296       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [301fb617b1f960338a688814750032b12882ded15eac0506bfd49ddf0934870b] <==
	I1123 09:53:52.260139       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:53:52.269544       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:53:52.269778       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 09:53:52.269851       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 09:53:52.285426       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:53:52.285733       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 09:53:52.285750       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:53:52.285760       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:53:52.285979       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 09:53:52.288383       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:53:52.289032       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 09:53:52.292551       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:53:52.295238       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:53:52.304445       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:53:52.309463       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 09:53:52.309713       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 09:53:52.310194       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-902289"
	I1123 09:53:52.309596       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:53:52.310362       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:53:52.310390       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:53:52.310476       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:53:52.310318       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 09:53:52.316282       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:53:52.333519       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:53:52.333592       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [b6aece6a157139e982fe1e6ec7e327f9f62fa96ac78ffcbccff18c993426e2a5] <==
	I1123 09:52:46.382882       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:52:46.382997       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 09:52:46.383013       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 09:52:46.383048       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:52:46.383166       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:52:46.383364       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:52:46.388903       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 09:52:46.388960       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:52:46.392330       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 09:52:46.393433       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 09:52:46.393468       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 09:52:46.393489       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 09:52:46.393494       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 09:52:46.393499       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 09:52:46.400501       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:52:46.400703       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:52:46.400879       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:52:46.408543       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:52:46.408651       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:52:46.408707       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:52:46.412230       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:52:46.413203       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-902289" podCIDRs=["10.244.0.0/24"]
	I1123 09:52:46.413499       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:52:46.449246       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:53:31.357625       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [21b7e1366f30e127ba37cbd9bc0a22fc7073ee77f7eb6a86efe280ea69f595b0] <==
	I1123 09:52:48.207592       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:52:48.279370       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:52:48.379525       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:52:48.379633       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 09:52:48.379738       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:52:48.410301       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:52:48.410348       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:52:48.414369       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:52:48.414717       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:52:48.414896       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:52:48.423025       1 config.go:200] "Starting service config controller"
	I1123 09:52:48.423118       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:52:48.423157       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:52:48.423203       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:52:48.423274       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:52:48.423301       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:52:48.425482       1 config.go:309] "Starting node config controller"
	I1123 09:52:48.426391       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:52:48.426476       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:52:48.526963       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:52:48.527070       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:52:48.527321       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [924727f67067568042a015bc3e901ad4ad44c23a740962980e1d770157ccd349] <==
	I1123 09:53:41.118236       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:53:42.458450       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:53:49.465905       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:53:49.465942       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 09:53:49.466001       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:53:50.433171       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:53:50.433291       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:53:50.438371       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:53:50.438775       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:53:50.438851       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:53:50.441449       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:53:50.441471       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:53:50.441777       1 config.go:200] "Starting service config controller"
	I1123 09:53:50.441796       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:53:50.466977       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:53:50.467067       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:53:50.467798       1 config.go:309] "Starting node config controller"
	I1123 09:53:50.467875       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:53:50.467909       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:53:50.542487       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:53:50.542593       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:53:50.567447       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [00f2d513b8a4eeb15f37c571edaf256e1cc41499c119fb407d7ad17fb1c4e582] <==
	E1123 09:52:39.061696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 09:52:39.069070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:52:39.125506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:52:39.164299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:52:39.179467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:52:39.231214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:52:39.289254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:52:39.429436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:52:39.521722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:52:39.561684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:52:39.561799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:52:39.580766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:52:39.598658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:52:39.621037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:52:39.623807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:52:39.689441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:52:39.759069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:52:40.846224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 09:52:46.443103       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:53:32.943511       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1123 09:53:32.943545       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1123 09:53:32.943572       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1123 09:53:32.943621       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:53:32.943625       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1123 09:53:32.943639       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [28a462431337106b621a0e2e0ebeda0f9205283900c74c993b5b8f1e0ab5751b] <==
	I1123 09:53:48.683878       1 serving.go:386] Generated self-signed cert in-memory
	I1123 09:53:50.887941       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:53:50.887979       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:53:50.913701       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:53:50.913918       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 09:53:50.913986       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 09:53:50.914066       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:53:50.926920       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:53:50.926952       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:53:50.926971       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:53:50.926979       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:53:51.019083       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 09:53:51.031428       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:53:51.031555       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:53:40 pause-902289 kubelet[1304]: E1123 09:53:40.839738    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55824\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bebd08c2-99f4-4417-a511-ab1014ed8137" pod="kube-system/kube-proxy-55824"
	Nov 23 09:53:40 pause-902289 kubelet[1304]: E1123 09:53:40.840945    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-94mmp\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="60daea3c-96b0-4122-adef-f228835ee2df" pod="kube-system/coredns-66bc5c9577-94mmp"
	Nov 23 09:53:40 pause-902289 kubelet[1304]: E1123 09:53:40.841166    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-902289\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c86b026e29aafe88613d550d63f628df" pod="kube-system/etcd-pause-902289"
	Nov 23 09:53:40 pause-902289 kubelet[1304]: E1123 09:53:40.841326    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-902289\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e8e40c3598d52b0fb8bd400e67498b67" pod="kube-system/kube-apiserver-pause-902289"
	Nov 23 09:53:40 pause-902289 kubelet[1304]: E1123 09:53:40.841781    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-902289\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e31deae83239ff18305f263d53263c80" pod="kube-system/kube-scheduler-pause-902289"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.448778    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-xmfwf\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="dd9e7594-d56b-4dc1-bf62-ab12f3d30214" pod="kube-system/kindnet-xmfwf"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.449701    1304 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-902289\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.450679    1304 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-902289\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.506502    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-55824\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="bebd08c2-99f4-4417-a511-ab1014ed8137" pod="kube-system/kube-proxy-55824"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.556679    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-94mmp\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="60daea3c-96b0-4122-adef-f228835ee2df" pod="kube-system/coredns-66bc5c9577-94mmp"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.577297    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-902289\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="c86b026e29aafe88613d550d63f628df" pod="kube-system/etcd-pause-902289"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.593650    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-902289\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="e8e40c3598d52b0fb8bd400e67498b67" pod="kube-system/kube-apiserver-pause-902289"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.612215    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-902289\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="e31deae83239ff18305f263d53263c80" pod="kube-system/kube-scheduler-pause-902289"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.622896    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-902289\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="70af69decc0eb3e886481e915ab38a63" pod="kube-system/kube-controller-manager-pause-902289"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.641864    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-902289\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="70af69decc0eb3e886481e915ab38a63" pod="kube-system/kube-controller-manager-pause-902289"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.650059    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-xmfwf\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="dd9e7594-d56b-4dc1-bf62-ab12f3d30214" pod="kube-system/kindnet-xmfwf"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.652096    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-55824\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="bebd08c2-99f4-4417-a511-ab1014ed8137" pod="kube-system/kube-proxy-55824"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.669935    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-94mmp\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="60daea3c-96b0-4122-adef-f228835ee2df" pod="kube-system/coredns-66bc5c9577-94mmp"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.759996    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-902289\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="c86b026e29aafe88613d550d63f628df" pod="kube-system/etcd-pause-902289"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.790213    1304 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-902289\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.790738    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-902289\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="e8e40c3598d52b0fb8bd400e67498b67" pod="kube-system/kube-apiserver-pause-902289"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.815238    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-902289\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="e31deae83239ff18305f263d53263c80" pod="kube-system/kube-scheduler-pause-902289"
	Nov 23 09:53:59 pause-902289 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:53:59 pause-902289 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:53:59 pause-902289 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-902289 -n pause-902289
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-902289 -n pause-902289: exit status 2 (510.441008ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-902289 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-902289
helpers_test.go:243: (dbg) docker inspect pause-902289:

-- stdout --
	[
	    {
	        "Id": "9c843fb4ed6a911e5401833037dc7fc7f90714504a39cf8efb88269a5f938a10",
	        "Created": "2025-11-23T09:52:09.251636502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 417492,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:52:09.336769907Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/9c843fb4ed6a911e5401833037dc7fc7f90714504a39cf8efb88269a5f938a10/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9c843fb4ed6a911e5401833037dc7fc7f90714504a39cf8efb88269a5f938a10/hostname",
	        "HostsPath": "/var/lib/docker/containers/9c843fb4ed6a911e5401833037dc7fc7f90714504a39cf8efb88269a5f938a10/hosts",
	        "LogPath": "/var/lib/docker/containers/9c843fb4ed6a911e5401833037dc7fc7f90714504a39cf8efb88269a5f938a10/9c843fb4ed6a911e5401833037dc7fc7f90714504a39cf8efb88269a5f938a10-json.log",
	        "Name": "/pause-902289",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-902289:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-902289",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9c843fb4ed6a911e5401833037dc7fc7f90714504a39cf8efb88269a5f938a10",
	                "LowerDir": "/var/lib/docker/overlay2/cb22273ab66a6b5ac51dfa8fc1dfb117cbc7c2e11c9338f41068244f7cab27ef-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb22273ab66a6b5ac51dfa8fc1dfb117cbc7c2e11c9338f41068244f7cab27ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb22273ab66a6b5ac51dfa8fc1dfb117cbc7c2e11c9338f41068244f7cab27ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb22273ab66a6b5ac51dfa8fc1dfb117cbc7c2e11c9338f41068244f7cab27ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-902289",
	                "Source": "/var/lib/docker/volumes/pause-902289/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-902289",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-902289",
	                "name.minikube.sigs.k8s.io": "pause-902289",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "64d7250f44a672bd5612604485455fbe232d5c25624dc64a54078f96216efe60",
	            "SandboxKey": "/var/run/docker/netns/64d7250f44a6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33349"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33350"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33353"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33351"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33352"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-902289": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:78:51:ee:fb:f8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "542e7a0b0cabf720b7fa60abc18e7d5229e71f392637e7183f5acfb2e30021af",
	                    "EndpointID": "d9d72e7e00275d7a31f50d6ef9ed6dff714a3598613eb5eb7d50188e1c18e66b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-902289",
	                        "9c843fb4ed6a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-902289 -n pause-902289
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-902289 -n pause-902289: exit status 2 (417.134631ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-902289 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-902289 logs -n 25: (1.744311742s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-507563 sudo cat /var/lib/kubelet/config.yaml                                                                      │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo systemctl status docker --all --full --no-pager                                                       │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo systemctl cat docker --no-pager                                                                       │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo cat /etc/docker/daemon.json                                                                           │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo docker system info                                                                                    │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo cri-dockerd --version                                                                                 │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo systemctl cat containerd --no-pager                                                                   │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo cat /etc/containerd/config.toml                                                                       │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo containerd config dump                                                                                │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo systemctl cat crio --no-pager                                                                         │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ ssh     │ -p cilium-507563 sudo crio config                                                                                           │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ delete  │ -p cilium-507563                                                                                                            │ cilium-507563             │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │ 23 Nov 25 09:53 UTC │
	│ start   │ -p force-systemd-env-653569 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-653569  │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │ 23 Nov 25 09:54 UTC │
	│ start   │ -p pause-902289 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-902289              │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │ 23 Nov 25 09:53 UTC │
	│ pause   │ -p pause-902289 --alsologtostderr -v=5                                                                                      │ pause-902289              │ jenkins │ v1.37.0 │ 23 Nov 25 09:53 UTC │                     │
	│ delete  │ -p force-systemd-env-653569                                                                                                 │ force-systemd-env-653569  │ jenkins │ v1.37.0 │ 23 Nov 25 09:54 UTC │ 23 Nov 25 09:54 UTC │
	│ start   │ -p force-systemd-flag-692168 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-692168 │ jenkins │ v1.37.0 │ 23 Nov 25 09:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:54:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:54:04.550673  430899 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:54:04.550858  430899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:54:04.550873  430899 out.go:374] Setting ErrFile to fd 2...
	I1123 09:54:04.550879  430899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:54:04.551174  430899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:54:04.551615  430899 out.go:368] Setting JSON to false
	I1123 09:54:04.552596  430899 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9393,"bootTime":1763882251,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 09:54:04.552662  430899 start.go:143] virtualization:  
	I1123 09:54:04.559336  430899 out.go:179] * [force-systemd-flag-692168] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 09:54:04.563073  430899 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:54:04.563266  430899 notify.go:221] Checking for updates...
	I1123 09:54:04.569952  430899 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:54:04.573434  430899 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:54:04.576893  430899 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 09:54:04.580090  430899 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 09:54:04.583969  430899 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:54:04.588113  430899 config.go:182] Loaded profile config "pause-902289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:54:04.588230  430899 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:54:04.630655  430899 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 09:54:04.630771  430899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:54:04.722727  430899 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 09:54:04.710211704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:54:04.722832  430899 docker.go:319] overlay module found
	I1123 09:54:04.726105  430899 out.go:179] * Using the docker driver based on user configuration
	
	
	==> CRI-O <==
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.189867837Z" level=info msg="Started container" PID=2218 containerID=891c5da2b2cf8e87bd8f24a275f47debf482222a675ab0960fad8dd9ee882ab2 description=kube-system/kindnet-xmfwf/kindnet-cni id=bc12d0d4-b412-4c2c-8943-afcd754c8414 name=/runtime.v1.RuntimeService/StartContainer sandboxID=63bbf5223d7f9fe1d92710d109a7b312186432d7f7eea15503372bd88420dd35
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.227167342Z" level=info msg="Created container 28a462431337106b621a0e2e0ebeda0f9205283900c74c993b5b8f1e0ab5751b: kube-system/kube-scheduler-pause-902289/kube-scheduler" id=6e65dc95-6b24-49f6-be22-276cd07b5cba name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.230213548Z" level=info msg="Created container e2c772377726bd8b3beea2efc62f66e9cfb85a568feb56ef5d6c49a797734800: kube-system/coredns-66bc5c9577-94mmp/coredns" id=5b9eaa34-dab3-4e51-b000-151797a23f31 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.233980635Z" level=info msg="Starting container: 28a462431337106b621a0e2e0ebeda0f9205283900c74c993b5b8f1e0ab5751b" id=c47020f4-9eed-4c47-9b8e-9a50a82fc49c name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.234439743Z" level=info msg="Starting container: e2c772377726bd8b3beea2efc62f66e9cfb85a568feb56ef5d6c49a797734800" id=ccc3f007-40a5-4bc2-8ecd-4531a532ec80 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.2616218Z" level=info msg="Created container 656a67f7696253ff07fee5935f113bf2aab9c31a82f76613d0a52bb745cf02e3: kube-system/etcd-pause-902289/etcd" id=8b1ae589-b0dc-4f2f-bee6-fbffdcd569f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.269651693Z" level=info msg="Started container" PID=2221 containerID=28a462431337106b621a0e2e0ebeda0f9205283900c74c993b5b8f1e0ab5751b description=kube-system/kube-scheduler-pause-902289/kube-scheduler id=c47020f4-9eed-4c47-9b8e-9a50a82fc49c name=/runtime.v1.RuntimeService/StartContainer sandboxID=1326de3f641c3a9e7f4a48bc9918cfbb6edd81b06b83c4f9db0295745db9c2da
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.271115796Z" level=info msg="Started container" PID=2230 containerID=e2c772377726bd8b3beea2efc62f66e9cfb85a568feb56ef5d6c49a797734800 description=kube-system/coredns-66bc5c9577-94mmp/coredns id=ccc3f007-40a5-4bc2-8ecd-4531a532ec80 name=/runtime.v1.RuntimeService/StartContainer sandboxID=14b140584fb0895c99d84bac283d37f9a26e754827ad300a12e6c51c42021d2b
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.289893333Z" level=info msg="Starting container: 656a67f7696253ff07fee5935f113bf2aab9c31a82f76613d0a52bb745cf02e3" id=32c9cd29-8380-4591-af79-75797c2184f5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:53:41 pause-902289 crio[2093]: time="2025-11-23T09:53:41.295331221Z" level=info msg="Started container" PID=2251 containerID=656a67f7696253ff07fee5935f113bf2aab9c31a82f76613d0a52bb745cf02e3 description=kube-system/etcd-pause-902289/etcd id=32c9cd29-8380-4591-af79-75797c2184f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e2b1ac4a2bd4a73405c84f99e50a2572815358c8f0e1d4acf2f3365e6fd81dd
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.723543123Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.728156261Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.728332977Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.728470341Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.732754178Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.73278768Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.73280798Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.739558691Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.739708945Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.73979149Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.743285358Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.743326532Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.743377216Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.748283708Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:53:51 pause-902289 crio[2093]: time="2025-11-23T09:53:51.748320837Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	656a67f769625       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   24 seconds ago       Running             etcd                      1                   1e2b1ac4a2bd4       etcd-pause-902289                      kube-system
	e2c772377726b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   14b140584fb08       coredns-66bc5c9577-94mmp               kube-system
	28a4624313371       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   24 seconds ago       Running             kube-scheduler            1                   1326de3f641c3       kube-scheduler-pause-902289            kube-system
	891c5da2b2cf8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   63bbf5223d7f9       kindnet-xmfwf                          kube-system
	301fb617b1f96       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   24 seconds ago       Running             kube-controller-manager   1                   1d8bceee68d0e       kube-controller-manager-pause-902289   kube-system
	924727f670675       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   24 seconds ago       Running             kube-proxy                1                   5e175c7eceea5       kube-proxy-55824                       kube-system
	e6a3434aae739       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   24 seconds ago       Running             kube-apiserver            1                   c571866863292       kube-apiserver-pause-902289            kube-system
	4bb5df37d1031       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   36 seconds ago       Exited              coredns                   0                   14b140584fb08       coredns-66bc5c9577-94mmp               kube-system
	a4f1980bab92b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   63bbf5223d7f9       kindnet-xmfwf                          kube-system
	21b7e1366f30e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   5e175c7eceea5       kube-proxy-55824                       kube-system
	b6aece6a15713       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   1d8bceee68d0e       kube-controller-manager-pause-902289   kube-system
	00f2d513b8a4e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   1326de3f641c3       kube-scheduler-pause-902289            kube-system
	9f818fc66635c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   c571866863292       kube-apiserver-pause-902289            kube-system
	f9cfd21effcad       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   1e2b1ac4a2bd4       etcd-pause-902289                      kube-system
	
	
	==> coredns [4bb5df37d1031824f5c4150f63585d202677be311760ed8886913f82f675b2d2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60295 - 18510 "HINFO IN 1890666745503102433.2653721831748515458. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.037532903s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e2c772377726bd8b3beea2efc62f66e9cfb85a568feb56ef5d6c49a797734800] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49126 - 41370 "HINFO IN 2291029018498211150.3434779713424746718. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027290669s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               pause-902289
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-902289
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=pause-902289
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_52_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:52:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-902289
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:53:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:53:43 +0000   Sun, 23 Nov 2025 09:52:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:53:43 +0000   Sun, 23 Nov 2025 09:52:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:53:43 +0000   Sun, 23 Nov 2025 09:52:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:53:43 +0000   Sun, 23 Nov 2025 09:53:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-902289
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                fafaac0a-0e9a-4fa8-99b2-fb29633ef74d
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-94mmp                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     78s
	  kube-system                 etcd-pause-902289                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         87s
	  kube-system                 kindnet-xmfwf                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-pause-902289             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-pause-902289    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-55824                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-902289             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 77s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Warning  CgroupV1                 95s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  95s (x8 over 95s)  kubelet          Node pause-902289 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    95s (x8 over 95s)  kubelet          Node pause-902289 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     95s (x8 over 95s)  kubelet          Node pause-902289 status is now: NodeHasSufficientPID
	  Normal   Starting                 83s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 83s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  83s                kubelet          Node pause-902289 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s                kubelet          Node pause-902289 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s                kubelet          Node pause-902289 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           79s                node-controller  Node pause-902289 event: Registered Node pause-902289 in Controller
	  Normal   NodeReady                37s                kubelet          Node pause-902289 status is now: NodeReady
	  Normal   RegisteredNode           13s                node-controller  Node pause-902289 event: Registered Node pause-902289 in Controller
	
	
	==> dmesg <==
	[Nov23 09:25] overlayfs: idmapped layers are currently not supported
	[Nov23 09:26] overlayfs: idmapped layers are currently not supported
	[Nov23 09:31] overlayfs: idmapped layers are currently not supported
	[  +4.906932] overlayfs: idmapped layers are currently not supported
	[Nov23 09:32] overlayfs: idmapped layers are currently not supported
	[ +39.649169] overlayfs: idmapped layers are currently not supported
	[Nov23 09:34] overlayfs: idmapped layers are currently not supported
	[Nov23 09:39] overlayfs: idmapped layers are currently not supported
	[ +33.513761] overlayfs: idmapped layers are currently not supported
	[Nov23 09:41] overlayfs: idmapped layers are currently not supported
	[Nov23 09:42] overlayfs: idmapped layers are currently not supported
	[Nov23 09:43] overlayfs: idmapped layers are currently not supported
	[Nov23 09:45] overlayfs: idmapped layers are currently not supported
	[ +17.384674] overlayfs: idmapped layers are currently not supported
	[ +16.809296] overlayfs: idmapped layers are currently not supported
	[Nov23 09:46] overlayfs: idmapped layers are currently not supported
	[ +17.278795] overlayfs: idmapped layers are currently not supported
	[Nov23 09:47] overlayfs: idmapped layers are currently not supported
	[ +12.563591] hrtimer: interrupt took 4093727 ns
	[ +14.190024] overlayfs: idmapped layers are currently not supported
	[Nov23 09:49] overlayfs: idmapped layers are currently not supported
	[Nov23 09:50] overlayfs: idmapped layers are currently not supported
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [656a67f7696253ff07fee5935f113bf2aab9c31a82f76613d0a52bb745cf02e3] <==
	{"level":"warn","ts":"2025-11-23T09:53:45.418740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.534057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.539817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.599889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.637641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.685529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.725397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.766302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.865584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.890825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:45.938272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.021921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.088428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.139329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.163583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.202255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.243918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.270545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.282747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.332827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.350286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.397811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.427524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.470472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:53:46.736666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48082","server-name":"","error":"EOF"}
	
	
	==> etcd [f9cfd21effcadf8269de4c91c08df2b43305336549c8f0bd07926f49473ef1dd] <==
	{"level":"warn","ts":"2025-11-23T09:52:34.671962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:52:34.739854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:52:34.823768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:52:34.860403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:52:34.896015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:52:34.964421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:52:35.254590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34632","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T09:53:32.941728Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-23T09:53:32.941781Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-902289","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-11-23T09:53:32.941882Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T09:53:33.078873Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T09:53:33.078952Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T09:53:33.078993Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-11-23T09:53:33.079079Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-23T09:53:33.079115Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-23T09:53:33.079112Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T09:53:33.079184Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T09:53:33.079224Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-23T09:53:33.079156Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T09:53:33.079289Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T09:53:33.079320Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T09:53:33.082323Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-11-23T09:53:33.082403Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T09:53:33.082442Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T09:53:33.082448Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-902289","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 09:54:05 up  2:36,  0 user,  load average: 3.62, 2.28, 1.87
	Linux pause-902289 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [891c5da2b2cf8e87bd8f24a275f47debf482222a675ab0960fad8dd9ee882ab2] <==
	I1123 09:53:41.350533       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:53:41.350740       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 09:53:41.350881       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:53:41.350893       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:53:41.350903       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:53:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:53:41.723402       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:53:41.723516       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:53:41.723557       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:53:41.739981       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 09:53:48.650771       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 09:53:48.652706       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 09:53:48.652845       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 09:53:48.652952       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1123 09:53:50.164742       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:53:50.164862       1 metrics.go:72] Registering metrics
	I1123 09:53:50.164963       1 controller.go:711] "Syncing nftables rules"
	I1123 09:53:51.723025       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:53:51.723158       1 main.go:301] handling current node
	I1123 09:54:01.723769       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:54:01.723801       1 main.go:301] handling current node
	
	
	==> kindnet [a4f1980bab92b13afddf2474d8b4b5b8b53f0cd0a64c295106b26ae3db1103af] <==
	I1123 09:52:48.260498       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:52:48.260892       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 09:52:48.261080       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:52:48.261832       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:52:48.261895       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:52:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:52:48.459720       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:52:48.459750       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:52:48.459759       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:52:48.459875       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 09:53:18.371436       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 09:53:18.459539       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 09:53:18.459758       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 09:53:18.459896       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 09:53:19.959892       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:53:19.960030       1 metrics.go:72] Registering metrics
	I1123 09:53:19.960140       1 controller.go:711] "Syncing nftables rules"
	I1123 09:53:28.377474       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:53:28.377528       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9f818fc66635cc44eeb47e4207008d4a814ac58d7495df544d7c6550de4cfd40] <==
	W1123 09:53:32.953528       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.953600       1 logging.go:55] [core] [Channel #25 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.953656       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.953719       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.953766       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.953828       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.953890       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.953938       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.953995       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954053       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954101       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954162       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954215       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954273       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954316       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954379       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954450       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954497       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954549       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954601       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954862       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.954908       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.955058       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.955106       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1123 09:53:32.955174       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e6a3434aae7399365305f02fe70d5f6ea51d903da9bc3be6ddc186ca7434c593] <==
	I1123 09:53:48.838553       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:53:48.838582       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:53:48.857686       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 09:53:48.858041       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 09:53:48.858126       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 09:53:48.869246       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:53:48.913863       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:53:48.916593       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 09:53:48.946035       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 09:53:48.957534       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 09:53:48.961540       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 09:53:48.957750       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 09:53:48.961766       1 policy_source.go:240] refreshing policies
	I1123 09:53:48.957772       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 09:53:48.969594       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:53:48.957781       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 09:53:48.958342       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1123 09:53:49.000749       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 09:53:49.006730       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:53:49.364599       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:53:50.992940       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:53:52.335791       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:53:52.472110       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:53:52.516086       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:53:52.619296       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [301fb617b1f960338a688814750032b12882ded15eac0506bfd49ddf0934870b] <==
	I1123 09:53:52.260139       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:53:52.269544       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:53:52.269778       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 09:53:52.269851       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 09:53:52.285426       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:53:52.285733       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 09:53:52.285750       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:53:52.285760       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:53:52.285979       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 09:53:52.288383       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:53:52.289032       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 09:53:52.292551       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:53:52.295238       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:53:52.304445       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:53:52.309463       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 09:53:52.309713       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 09:53:52.310194       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-902289"
	I1123 09:53:52.309596       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:53:52.310362       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:53:52.310390       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:53:52.310476       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:53:52.310318       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 09:53:52.316282       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:53:52.333519       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:53:52.333592       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [b6aece6a157139e982fe1e6ec7e327f9f62fa96ac78ffcbccff18c993426e2a5] <==
	I1123 09:52:46.382882       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:52:46.382997       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 09:52:46.383013       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 09:52:46.383048       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:52:46.383166       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:52:46.383364       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:52:46.388903       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 09:52:46.388960       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:52:46.392330       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 09:52:46.393433       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 09:52:46.393468       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 09:52:46.393489       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 09:52:46.393494       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 09:52:46.393499       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 09:52:46.400501       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:52:46.400703       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:52:46.400879       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:52:46.408543       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:52:46.408651       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:52:46.408707       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:52:46.412230       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:52:46.413203       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-902289" podCIDRs=["10.244.0.0/24"]
	I1123 09:52:46.413499       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:52:46.449246       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:53:31.357625       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [21b7e1366f30e127ba37cbd9bc0a22fc7073ee77f7eb6a86efe280ea69f595b0] <==
	I1123 09:52:48.207592       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:52:48.279370       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:52:48.379525       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:52:48.379633       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 09:52:48.379738       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:52:48.410301       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:52:48.410348       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:52:48.414369       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:52:48.414717       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:52:48.414896       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:52:48.423025       1 config.go:200] "Starting service config controller"
	I1123 09:52:48.423118       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:52:48.423157       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:52:48.423203       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:52:48.423274       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:52:48.423301       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:52:48.425482       1 config.go:309] "Starting node config controller"
	I1123 09:52:48.426391       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:52:48.426476       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:52:48.526963       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:52:48.527070       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:52:48.527321       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [924727f67067568042a015bc3e901ad4ad44c23a740962980e1d770157ccd349] <==
	I1123 09:53:41.118236       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:53:42.458450       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:53:49.465905       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:53:49.465942       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 09:53:49.466001       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:53:50.433171       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:53:50.433291       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:53:50.438371       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:53:50.438775       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:53:50.438851       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:53:50.441449       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:53:50.441471       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:53:50.441777       1 config.go:200] "Starting service config controller"
	I1123 09:53:50.441796       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:53:50.466977       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:53:50.467067       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:53:50.467798       1 config.go:309] "Starting node config controller"
	I1123 09:53:50.467875       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:53:50.467909       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:53:50.542487       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:53:50.542593       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:53:50.567447       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [00f2d513b8a4eeb15f37c571edaf256e1cc41499c119fb407d7ad17fb1c4e582] <==
	E1123 09:52:39.061696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 09:52:39.069070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:52:39.125506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:52:39.164299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:52:39.179467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:52:39.231214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:52:39.289254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:52:39.429436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:52:39.521722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:52:39.561684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:52:39.561799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:52:39.580766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:52:39.598658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:52:39.621037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:52:39.623807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:52:39.689441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:52:39.759069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:52:40.846224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 09:52:46.443103       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:53:32.943511       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1123 09:53:32.943545       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1123 09:53:32.943572       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1123 09:53:32.943621       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:53:32.943625       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1123 09:53:32.943639       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [28a462431337106b621a0e2e0ebeda0f9205283900c74c993b5b8f1e0ab5751b] <==
	I1123 09:53:48.683878       1 serving.go:386] Generated self-signed cert in-memory
	I1123 09:53:50.887941       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:53:50.887979       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:53:50.913701       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:53:50.913918       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 09:53:50.913986       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 09:53:50.914066       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:53:50.926920       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:53:50.926952       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:53:50.926971       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:53:50.926979       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:53:51.019083       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 09:53:51.031428       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:53:51.031555       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:53:40 pause-902289 kubelet[1304]: E1123 09:53:40.839738    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55824\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bebd08c2-99f4-4417-a511-ab1014ed8137" pod="kube-system/kube-proxy-55824"
	Nov 23 09:53:40 pause-902289 kubelet[1304]: E1123 09:53:40.840945    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-94mmp\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="60daea3c-96b0-4122-adef-f228835ee2df" pod="kube-system/coredns-66bc5c9577-94mmp"
	Nov 23 09:53:40 pause-902289 kubelet[1304]: E1123 09:53:40.841166    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-902289\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c86b026e29aafe88613d550d63f628df" pod="kube-system/etcd-pause-902289"
	Nov 23 09:53:40 pause-902289 kubelet[1304]: E1123 09:53:40.841326    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-902289\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e8e40c3598d52b0fb8bd400e67498b67" pod="kube-system/kube-apiserver-pause-902289"
	Nov 23 09:53:40 pause-902289 kubelet[1304]: E1123 09:53:40.841781    1304 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-902289\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e31deae83239ff18305f263d53263c80" pod="kube-system/kube-scheduler-pause-902289"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.448778    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-xmfwf\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="dd9e7594-d56b-4dc1-bf62-ab12f3d30214" pod="kube-system/kindnet-xmfwf"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.449701    1304 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-902289\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.450679    1304 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-902289\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.506502    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-55824\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="bebd08c2-99f4-4417-a511-ab1014ed8137" pod="kube-system/kube-proxy-55824"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.556679    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-94mmp\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="60daea3c-96b0-4122-adef-f228835ee2df" pod="kube-system/coredns-66bc5c9577-94mmp"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.577297    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-902289\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="c86b026e29aafe88613d550d63f628df" pod="kube-system/etcd-pause-902289"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.593650    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-902289\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="e8e40c3598d52b0fb8bd400e67498b67" pod="kube-system/kube-apiserver-pause-902289"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.612215    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-902289\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="e31deae83239ff18305f263d53263c80" pod="kube-system/kube-scheduler-pause-902289"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.622896    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-902289\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="70af69decc0eb3e886481e915ab38a63" pod="kube-system/kube-controller-manager-pause-902289"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.641864    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-902289\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="70af69decc0eb3e886481e915ab38a63" pod="kube-system/kube-controller-manager-pause-902289"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.650059    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-xmfwf\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="dd9e7594-d56b-4dc1-bf62-ab12f3d30214" pod="kube-system/kindnet-xmfwf"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.652096    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-55824\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="bebd08c2-99f4-4417-a511-ab1014ed8137" pod="kube-system/kube-proxy-55824"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.669935    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-94mmp\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="60daea3c-96b0-4122-adef-f228835ee2df" pod="kube-system/coredns-66bc5c9577-94mmp"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.759996    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-902289\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="c86b026e29aafe88613d550d63f628df" pod="kube-system/etcd-pause-902289"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.790213    1304 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-902289\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.790738    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-902289\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="e8e40c3598d52b0fb8bd400e67498b67" pod="kube-system/kube-apiserver-pause-902289"
	Nov 23 09:53:48 pause-902289 kubelet[1304]: E1123 09:53:48.815238    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-902289\" is forbidden: User \"system:node:pause-902289\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-902289' and this object" podUID="e31deae83239ff18305f263d53263c80" pod="kube-system/kube-scheduler-pause-902289"
	Nov 23 09:53:59 pause-902289 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:53:59 pause-902289 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:53:59 pause-902289 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-902289 -n pause-902289
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-902289 -n pause-902289: exit status 2 (436.30412ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-902289 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.98s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-706028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-706028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (310.721108ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:09:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-706028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-706028 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-706028 describe deploy/metrics-server -n kube-system: exit status 1 (118.318735ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-706028 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-706028
helpers_test.go:243: (dbg) docker inspect old-k8s-version-706028:

-- stdout --
	[
	    {
	        "Id": "ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5",
	        "Created": "2025-11-23T10:08:00.027667236Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501134,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:08:00.185726243Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/hosts",
	        "LogPath": "/var/lib/docker/containers/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5-json.log",
	        "Name": "/old-k8s-version-706028",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-706028:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-706028",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5",
	                "LowerDir": "/var/lib/docker/overlay2/4fc786c1031046370668829710493e9535cd397f4cc7ed5d9f51a091e2219a9e-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4fc786c1031046370668829710493e9535cd397f4cc7ed5d9f51a091e2219a9e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4fc786c1031046370668829710493e9535cd397f4cc7ed5d9f51a091e2219a9e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4fc786c1031046370668829710493e9535cd397f4cc7ed5d9f51a091e2219a9e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-706028",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-706028/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-706028",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-706028",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-706028",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e3e5b0eb392fff268e22881c8a28a4a8170d8ef3235dcc7cb2d6ef6994fa9a17",
	            "SandboxKey": "/var/run/docker/netns/e3e5b0eb392f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-706028": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:95:18:ef:c0:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "38827229c06574d77dd6a72b1084a1de5267d818d9a4bc2e2e69c7834d9baf50",
	                    "EndpointID": "3d921eb21e46108921f0727d91e4da7094d255bb0a386cf2cb9153a4456573c3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-706028",
	                        "ec71fb4cb0c2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
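The inspect dump above is the post-mortem view of the kic container backing the old-k8s-version-706028 profile; the relevant details are the five ports published on 127.0.0.1 with dynamically assigned host ports (22->33461, 2376->33462, 5000->33463, 8443->33464, 32443->33465) and the static address 192.168.76.2 on the profile's bridge network. A minimal sketch for re-running the same inspection by hand, assuming the profile container still exists on the host:

    # Sketch only: print the port map and the profile network's address
    # from the same inspect document shown above.
    docker container inspect old-k8s-version-706028 --format '{{json .NetworkSettings.Ports}}'
    docker container inspect old-k8s-version-706028 \
      --format '{{(index .NetworkSettings.Networks "old-k8s-version-706028").IPAddress}}'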
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706028 -n old-k8s-version-706028
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-706028 logs -n 25
E1123 10:09:10.354324  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/auto-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-706028 logs -n 25: (1.541759069s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-507563 sudo systemctl cat kubelet --no-pager                                                                                                  │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                   │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /etc/kubernetes/kubelet.conf                                                                                                  │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /var/lib/kubelet/config.yaml                                                                                                  │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl status docker --all --full --no-pager                                                                                   │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo systemctl cat docker --no-pager                                                                                                   │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /etc/docker/daemon.json                                                                                                       │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo docker system info                                                                                                                │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo systemctl status cri-docker --all --full --no-pager                                                                               │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo systemctl cat cri-docker --no-pager                                                                                               │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                          │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                    │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cri-dockerd --version                                                                                                             │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl status containerd --all --full --no-pager                                                                               │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo systemctl cat containerd --no-pager                                                                                               │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /lib/systemd/system/containerd.service                                                                                        │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /etc/containerd/config.toml                                                                                                   │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo containerd config dump                                                                                                            │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl status crio --all --full --no-pager                                                                                     │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl cat crio --no-pager                                                                                                     │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                           │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo crio config                                                                                                                       │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ delete  │ -p calico-507563                                                                                                                                        │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-706028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain            │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
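The last two audit rows are the interesting ones: the concurrent `start -p no-preload-020224 ... --preload=false` run has no END TIME yet, and neither does the command under test here, the metrics-server enable against old-k8s-version-706028 with its image and registry overridden, which matches the EnableAddonWhileActive failure above. A sketch of that failing step, using exactly the arguments recorded in the Audit table:

    out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-706028 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain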
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:09:01
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:09:01.316952  507023 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:09:01.317268  507023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:09:01.317298  507023 out.go:374] Setting ErrFile to fd 2...
	I1123 10:09:01.317317  507023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:09:01.317730  507023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:09:01.318364  507023 out.go:368] Setting JSON to false
	I1123 10:09:01.319731  507023 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10290,"bootTime":1763882251,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:09:01.319844  507023 start.go:143] virtualization:  
	I1123 10:09:01.323903  507023 out.go:179] * [no-preload-020224] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:09:01.328857  507023 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 10:09:01.328963  507023 notify.go:221] Checking for updates...
	I1123 10:09:01.335959  507023 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:09:01.339525  507023 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:09:01.342807  507023 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 10:09:01.346042  507023 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:09:01.349620  507023 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:09:01.353432  507023 config.go:182] Loaded profile config "old-k8s-version-706028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 10:09:01.353532  507023 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:09:01.386913  507023 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:09:01.387079  507023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:09:01.443346  507023 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:09:01.433609274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:09:01.443451  507023 docker.go:319] overlay module found
	I1123 10:09:01.446873  507023 out.go:179] * Using the docker driver based on user configuration
	I1123 10:09:01.449946  507023 start.go:309] selected driver: docker
	I1123 10:09:01.449970  507023 start.go:927] validating driver "docker" against <nil>
	I1123 10:09:01.449984  507023 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:09:01.450705  507023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:09:01.511360  507023 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:09:01.501825466 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:09:01.511517  507023 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:09:01.511752  507023 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:09:01.515064  507023 out.go:179] * Using Docker driver with root privileges
	I1123 10:09:01.518085  507023 cni.go:84] Creating CNI manager for ""
	I1123 10:09:01.518169  507023 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:09:01.518185  507023 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:09:01.518269  507023 start.go:353] cluster config:
	{Name:no-preload-020224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:09:01.521390  507023 out.go:179] * Starting "no-preload-020224" primary control-plane node in "no-preload-020224" cluster
	I1123 10:09:01.524294  507023 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:09:01.527273  507023 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:09:01.530249  507023 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:09:01.530504  507023 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/config.json ...
	I1123 10:09:01.530541  507023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/config.json: {Name:mkf0e9d6fdd838602fdd6ea7b7e84f7fa33a6251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:01.530783  507023 cache.go:107] acquiring lock: {Name:mk85a7ea341b7b22f7144b443067338b93f1733a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:09:01.530834  507023 cache.go:115] /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 10:09:01.530971  507023 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 134.181µs
	I1123 10:09:01.530992  507023 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 10:09:01.531037  507023 cache.go:107] acquiring lock: {Name:mk6dbb06f379574109993e0f18706986a896189d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:09:01.531475  507023 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 10:09:01.532062  507023 cache.go:107] acquiring lock: {Name:mkf85ca10e1c40480156040157763a03d84ef922 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:09:01.532165  507023 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 10:09:01.532629  507023 cache.go:107] acquiring lock: {Name:mka916dc9fc4585e18fed462a4e6c4c2236e466b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:09:01.532726  507023 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 10:09:01.532987  507023 cache.go:107] acquiring lock: {Name:mkaa5c4da3e01760d2e809ef3deba3927b072661 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:09:01.533138  507023 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 10:09:01.530311  507023 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:09:01.533558  507023 cache.go:107] acquiring lock: {Name:mk0a81679e590fdd4a9198b9f7bcc6fd7b402dd1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:09:01.534039  507023 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1123 10:09:01.535685  507023 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 10:09:01.536139  507023 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 10:09:01.536344  507023 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 10:09:01.533596  507023 cache.go:107] acquiring lock: {Name:mk4b36753df55ff24d49ddb99313394a283546fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:09:01.536785  507023 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1123 10:09:01.533614  507023 cache.go:107] acquiring lock: {Name:mk5e8535a6036e26b37940c711fe2645a974c77b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:09:01.537610  507023 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 10:09:01.539342  507023 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1123 10:09:01.539998  507023 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1123 10:09:01.540268  507023 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 10:09:01.541502  507023 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 10:09:01.586354  507023 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:09:01.586376  507023 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:09:01.586396  507023 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:09:01.586429  507023 start.go:360] acquireMachinesLock for no-preload-020224: {Name:mk7ef0b074cfea77847aa1186cdbc84a0a684281 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:09:01.586557  507023 start.go:364] duration metric: took 101.442µs to acquireMachinesLock for "no-preload-020224"
	I1123 10:09:01.586584  507023 start.go:93] Provisioning new machine with config: &{Name:no-preload-020224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020224 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:09:01.586662  507023 start.go:125] createHost starting for "" (driver="docker")
	I1123 10:09:01.591153  507023 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 10:09:01.591513  507023 start.go:159] libmachine.API.Create for "no-preload-020224" (driver="docker")
	I1123 10:09:01.591893  507023 client.go:173] LocalClient.Create starting
	I1123 10:09:01.592087  507023 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem
	I1123 10:09:01.592152  507023 main.go:143] libmachine: Decoding PEM data...
	I1123 10:09:01.592196  507023 main.go:143] libmachine: Parsing certificate...
	I1123 10:09:01.593141  507023 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem
	I1123 10:09:01.593212  507023 main.go:143] libmachine: Decoding PEM data...
	I1123 10:09:01.593244  507023 main.go:143] libmachine: Parsing certificate...
	I1123 10:09:01.594144  507023 cli_runner.go:164] Run: docker network inspect no-preload-020224 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:09:01.623330  507023 cli_runner.go:211] docker network inspect no-preload-020224 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:09:01.623446  507023 network_create.go:284] running [docker network inspect no-preload-020224] to gather additional debugging logs...
	I1123 10:09:01.623470  507023 cli_runner.go:164] Run: docker network inspect no-preload-020224
	W1123 10:09:01.645009  507023 cli_runner.go:211] docker network inspect no-preload-020224 returned with exit code 1
	I1123 10:09:01.645038  507023 network_create.go:287] error running [docker network inspect no-preload-020224]: docker network inspect no-preload-020224: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-020224 not found
	I1123 10:09:01.645053  507023 network_create.go:289] output of [docker network inspect no-preload-020224]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-020224 not found
	
	** /stderr **
	I1123 10:09:01.645161  507023 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:09:01.669577  507023 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d56166f18c3a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0e:f2:0f:1a:18:9c} reservation:<nil>}
	I1123 10:09:01.669950  507023 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fe6f7fd59576 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:8b:f7:8e:2b:59} reservation:<nil>}
	I1123 10:09:01.670185  507023 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c262e08021b1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:16:63:f0:32:b6} reservation:<nil>}
	I1123 10:09:01.670466  507023 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-38827229c065 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:05:0d:18:9a:e1} reservation:<nil>}
	I1123 10:09:01.670889  507023 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001cdf430}
	I1123 10:09:01.670911  507023 network_create.go:124] attempt to create docker network no-preload-020224 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 10:09:01.670971  507023 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-020224 no-preload-020224
	I1123 10:09:01.759761  507023 network_create.go:108] docker network no-preload-020224 192.168.85.0/24 created
	I1123 10:09:01.759791  507023 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-020224" container
	I1123 10:09:01.759860  507023 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:09:01.780303  507023 cli_runner.go:164] Run: docker volume create no-preload-020224 --label name.minikube.sigs.k8s.io=no-preload-020224 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:09:01.814239  507023 oci.go:103] Successfully created a docker volume no-preload-020224
	I1123 10:09:01.814340  507023 cli_runner.go:164] Run: docker run --rm --name no-preload-020224-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-020224 --entrypoint /usr/bin/test -v no-preload-020224:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:09:01.890279  507023 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1123 10:09:01.896553  507023 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1123 10:09:01.923968  507023 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1123 10:09:01.924330  507023 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1123 10:09:01.946273  507023 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1123 10:09:01.993588  507023 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1123 10:09:02.011520  507023 cache.go:157] /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1123 10:09:02.011553  507023 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 478.000145ms
	I1123 10:09:02.011575  507023 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 10:09:02.013443  507023 cache.go:162] opening:  /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1123 10:09:02.374447  507023 cache.go:157] /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 10:09:02.374523  507023 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 841.540019ms
	I1123 10:09:02.374557  507023 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 10:09:02.548222  507023 oci.go:107] Successfully prepared a docker volume no-preload-020224
	I1123 10:09:02.548282  507023 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1123 10:09:02.548426  507023 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 10:09:02.548530  507023 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:09:02.616397  507023 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-020224 --name no-preload-020224 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-020224 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-020224 --network no-preload-020224 --ip 192.168.85.2 --volume no-preload-020224:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:09:02.872323  507023 cache.go:157] /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 10:09:02.872369  507023 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.338755237s
	I1123 10:09:02.872385  507023 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 10:09:02.903530  507023 cache.go:157] /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 10:09:02.908647  507023 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.376018347s
	I1123 10:09:02.908692  507023 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 10:09:02.937853  507023 cache.go:157] /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 10:09:02.937882  507023 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.40582617s
	I1123 10:09:02.937895  507023 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 10:09:03.112081  507023 cache.go:157] /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 10:09:03.112113  507023 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.581078226s
	I1123 10:09:03.112152  507023 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 10:09:03.136872  507023 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Running}}
	I1123 10:09:03.175277  507023 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:09:03.202634  507023 cli_runner.go:164] Run: docker exec no-preload-020224 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:09:03.253470  507023 oci.go:144] the created container "no-preload-020224" has a running status.
	I1123 10:09:03.253499  507023 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa...
	I1123 10:09:03.348886  507023 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:09:03.384889  507023 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:09:03.407110  507023 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:09:03.407129  507023 kic_runner.go:114] Args: [docker exec --privileged no-preload-020224 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 10:09:03.463150  507023 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:09:03.488640  507023 machine.go:94] provisionDockerMachine start ...
	I1123 10:09:03.488725  507023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:09:03.515747  507023 main.go:143] libmachine: Using SSH client type: native
	I1123 10:09:03.516142  507023 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1123 10:09:03.516170  507023 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:09:03.516802  507023 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 10:09:04.156504  507023 cache.go:157] /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 10:09:04.156533  507023 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.622948715s
	I1123 10:09:04.156596  507023 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 10:09:04.156621  507023 cache.go:87] Successfully saved all images to host disk.
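The "Last Start" log above comes from the concurrent no-preload-020224 start rather than from old-k8s-version-706028 itself; because that profile is created with --preload=false, minikube saves each control-plane image to a per-architecture tar cache instead of relying on a preload tarball. A sketch for listing what that run left behind, assuming the MINIKUBE_HOME path printed throughout the log:

    # Cache paths taken from the cache.go lines above.
    CACHE=/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64
    ls "$CACHE/registry.k8s.io"        # kube-apiserver_v1.34.1, etcd_3.6.4-0, pause_3.10.1, ...
    ls "$CACHE/gcr.io/k8s-minikube"    # storage-provisioner_v5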
	
	
	==> CRI-O <==
	Nov 23 10:08:54 old-k8s-version-706028 crio[838]: time="2025-11-23T10:08:54.499475655Z" level=info msg="Created container 74b96dd0ddbceda6a4e77c5b71d5e140300b49c5e93d9af14c62b8d9abc99cd6: kube-system/coredns-5dd5756b68-h6b8n/coredns" id=836ccdc5-538e-4402-9ebf-c772cf8add6d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:08:54 old-k8s-version-706028 crio[838]: time="2025-11-23T10:08:54.500738569Z" level=info msg="Starting container: 74b96dd0ddbceda6a4e77c5b71d5e140300b49c5e93d9af14c62b8d9abc99cd6" id=648015d0-f042-4f34-a3a7-cd93ac22e3a3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:08:54 old-k8s-version-706028 crio[838]: time="2025-11-23T10:08:54.520779517Z" level=info msg="Started container" PID=1904 containerID=74b96dd0ddbceda6a4e77c5b71d5e140300b49c5e93d9af14c62b8d9abc99cd6 description=kube-system/coredns-5dd5756b68-h6b8n/coredns id=648015d0-f042-4f34-a3a7-cd93ac22e3a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=194ce1c4a0963bbf62d53a6fc94de0e2d98acf9c2ebe8644842902db85cc95ce
	Nov 23 10:08:59 old-k8s-version-706028 crio[838]: time="2025-11-23T10:08:59.75969926Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9755580a-69d3-4810-9124-9531180a310c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:08:59 old-k8s-version-706028 crio[838]: time="2025-11-23T10:08:59.759780927Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:08:59 old-k8s-version-706028 crio[838]: time="2025-11-23T10:08:59.778385762Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4da1830bbcc9fd9c92ea8e9be392bbdd6e9b8f444b29e2f935158282a7c80249 UID:3d8762ee-c527-4c0e-9d25-4aa79457ae6b NetNS:/var/run/netns/b7d05678-ba0f-4991-91d0-3ca50eea5b25 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40017ac578}] Aliases:map[]}"
	Nov 23 10:08:59 old-k8s-version-706028 crio[838]: time="2025-11-23T10:08:59.778442805Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 10:08:59 old-k8s-version-706028 crio[838]: time="2025-11-23T10:08:59.787578705Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4da1830bbcc9fd9c92ea8e9be392bbdd6e9b8f444b29e2f935158282a7c80249 UID:3d8762ee-c527-4c0e-9d25-4aa79457ae6b NetNS:/var/run/netns/b7d05678-ba0f-4991-91d0-3ca50eea5b25 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40017ac578}] Aliases:map[]}"
	Nov 23 10:08:59 old-k8s-version-706028 crio[838]: time="2025-11-23T10:08:59.787723831Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 10:08:59 old-k8s-version-706028 crio[838]: time="2025-11-23T10:08:59.793078478Z" level=info msg="Ran pod sandbox 4da1830bbcc9fd9c92ea8e9be392bbdd6e9b8f444b29e2f935158282a7c80249 with infra container: default/busybox/POD" id=9755580a-69d3-4810-9124-9531180a310c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:08:59 old-k8s-version-706028 crio[838]: time="2025-11-23T10:08:59.794168976Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e88ff179-f1ea-4889-81df-1eb09633f61f name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:08:59 old-k8s-version-706028 crio[838]: time="2025-11-23T10:08:59.794372622Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e88ff179-f1ea-4889-81df-1eb09633f61f name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:08:59 old-k8s-version-706028 crio[838]: time="2025-11-23T10:08:59.79447934Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e88ff179-f1ea-4889-81df-1eb09633f61f name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:08:59 old-k8s-version-706028 crio[838]: time="2025-11-23T10:08:59.795211694Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bb017572-e6a2-479e-88d8-a9439826065c name=/runtime.v1.ImageService/PullImage
	Nov 23 10:08:59 old-k8s-version-706028 crio[838]: time="2025-11-23T10:08:59.797615002Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 10:09:01 old-k8s-version-706028 crio[838]: time="2025-11-23T10:09:01.906961616Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=bb017572-e6a2-479e-88d8-a9439826065c name=/runtime.v1.ImageService/PullImage
	Nov 23 10:09:01 old-k8s-version-706028 crio[838]: time="2025-11-23T10:09:01.908775351Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=709043ac-5d9e-4115-ac20-1561e8f0f780 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:09:01 old-k8s-version-706028 crio[838]: time="2025-11-23T10:09:01.912124383Z" level=info msg="Creating container: default/busybox/busybox" id=1e458991-ae45-45a9-9453-6af1ecd0a9ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:09:01 old-k8s-version-706028 crio[838]: time="2025-11-23T10:09:01.91241489Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:09:01 old-k8s-version-706028 crio[838]: time="2025-11-23T10:09:01.936974981Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:09:01 old-k8s-version-706028 crio[838]: time="2025-11-23T10:09:01.938271045Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:09:01 old-k8s-version-706028 crio[838]: time="2025-11-23T10:09:01.966972206Z" level=info msg="Created container 633f684ffba3b45786dfdc52eead10ff77638b2e0b55606243899db008dd4888: default/busybox/busybox" id=1e458991-ae45-45a9-9453-6af1ecd0a9ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:09:01 old-k8s-version-706028 crio[838]: time="2025-11-23T10:09:01.969920837Z" level=info msg="Starting container: 633f684ffba3b45786dfdc52eead10ff77638b2e0b55606243899db008dd4888" id=6755a358-b1ef-4931-8ac7-2bb270cc558e name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:09:01 old-k8s-version-706028 crio[838]: time="2025-11-23T10:09:01.991481268Z" level=info msg="Started container" PID=1962 containerID=633f684ffba3b45786dfdc52eead10ff77638b2e0b55606243899db008dd4888 description=default/busybox/busybox id=6755a358-b1ef-4931-8ac7-2bb270cc558e name=/runtime.v1.RuntimeService/StartContainer sandboxID=4da1830bbcc9fd9c92ea8e9be392bbdd6e9b8f444b29e2f935158282a7c80249
	Nov 23 10:09:08 old-k8s-version-706028 crio[838]: time="2025-11-23T10:09:08.808568997Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
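The only error-level CRI-O entry in this window is the unhandled websocket upgrade failure at 10:09:08; everything before it is the busybox pod starting normally. To pull the same journal directly from the node, the Audit table shows the pattern the suite used against calico-507563; a sketch adapted to this profile, assuming it is still running:

    out/minikube-linux-arm64 ssh -p old-k8s-version-706028 sudo journalctl -u crio --no-pager
    out/minikube-linux-arm64 ssh -p old-k8s-version-706028 sudo systemctl status crio --all --full --no-pager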
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	633f684ffba3b       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   4da1830bbcc9f       busybox                                          default
	74b96dd0ddbce       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      16 seconds ago      Running             coredns                   0                   194ce1c4a0963       coredns-5dd5756b68-h6b8n                         kube-system
	f3373b10e1991       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      16 seconds ago      Running             storage-provisioner       0                   6df5a63ff17e0       storage-provisioner                              kube-system
	a1064f619b206       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    27 seconds ago      Running             kindnet-cni               0                   2e21ee7064173       kindnet-6l8w5                                    kube-system
	944db13d51806       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      30 seconds ago      Running             kube-proxy                0                   9f1f58f32af62       kube-proxy-s9rqv                                 kube-system
	bfee4a2cab832       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      51 seconds ago      Running             kube-apiserver            0                   102b7e6a5ce08       kube-apiserver-old-k8s-version-706028            kube-system
	c5123a17625b5       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      52 seconds ago      Running             kube-scheduler            0                   7bbacc98308db       kube-scheduler-old-k8s-version-706028            kube-system
	13afdc5449ace       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      52 seconds ago      Running             kube-controller-manager   0                   2143ffd8234b2       kube-controller-manager-old-k8s-version-706028   kube-system
	6f89fc040f99d       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      52 seconds ago      Running             etcd                      0                   d25e8e8641c49       etcd-old-k8s-version-706028                      kube-system
	
	
	==> coredns [74b96dd0ddbceda6a4e77c5b71d5e140300b49c5e93d9af14c62b8d9abc99cd6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47925 - 4893 "HINFO IN 4343248229139191694.1398402251021696834. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013851258s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-706028
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-706028
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=old-k8s-version-706028
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_08_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:08:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-706028
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:09:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:08:58 +0000   Sun, 23 Nov 2025 10:08:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:08:58 +0000   Sun, 23 Nov 2025 10:08:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:08:58 +0000   Sun, 23 Nov 2025 10:08:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:08:58 +0000   Sun, 23 Nov 2025 10:08:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-706028
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                1d4707fe-e85e-433b-aa40-17ce9a4af156
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-h6b8n                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-old-k8s-version-706028                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         43s
	  kube-system                 kindnet-6l8w5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-old-k8s-version-706028             250m (12%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-old-k8s-version-706028    200m (10%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-s9rqv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-old-k8s-version-706028             100m (5%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 30s                kube-proxy       
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node old-k8s-version-706028 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node old-k8s-version-706028 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node old-k8s-version-706028 status is now: NodeHasSufficientPID
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s                kubelet          Node old-k8s-version-706028 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s                kubelet          Node old-k8s-version-706028 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s                kubelet          Node old-k8s-version-706028 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s                node-controller  Node old-k8s-version-706028 event: Registered Node old-k8s-version-706028 in Controller
	  Normal  NodeReady                16s                kubelet          Node old-k8s-version-706028 status is now: NodeReady
	
	
	==> dmesg <==
	[ +17.384674] overlayfs: idmapped layers are currently not supported
	[ +16.809296] overlayfs: idmapped layers are currently not supported
	[Nov23 09:46] overlayfs: idmapped layers are currently not supported
	[ +17.278795] overlayfs: idmapped layers are currently not supported
	[Nov23 09:47] overlayfs: idmapped layers are currently not supported
	[ +12.563591] hrtimer: interrupt took 4093727 ns
	[ +14.190024] overlayfs: idmapped layers are currently not supported
	[Nov23 09:49] overlayfs: idmapped layers are currently not supported
	[Nov23 09:50] overlayfs: idmapped layers are currently not supported
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6f89fc040f99ddf19a1b358de4f4ff2d6225ed8a71b53eaeedc5b5433aec245f] <==
	{"level":"info","ts":"2025-11-23T10:08:18.832159Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T10:08:18.832281Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T10:08:18.812444Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T10:08:18.832736Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T10:08:18.832806Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T10:08:18.833608Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T10:08:18.83367Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T10:08:19.626953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-23T10:08:19.627063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-23T10:08:19.627115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-23T10:08:19.627154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-23T10:08:19.627185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T10:08:19.627233Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-23T10:08:19.627266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T10:08:19.629045Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:08:19.630152Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-706028 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T10:08:19.632392Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:08:19.632513Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:08:19.632565Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:08:19.632604Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:08:19.633487Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:08:19.63385Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-23T10:08:19.634218Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T10:08:19.634241Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T10:08:19.636055Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:09:10 up  2:51,  0 user,  load average: 4.52, 4.08, 3.16
	Linux old-k8s-version-706028 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a1064f619b2062924c5ac254c6e8262777e2aa3c55deb8662baab32e2632bc72] <==
	I1123 10:08:43.268370       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:08:43.357696       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:08:43.357921       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:08:43.357963       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:08:43.358043       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:08:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:08:43.558841       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:08:43.559169       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:08:43.559240       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:08:43.561862       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:08:43.759345       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:08:43.759429       1 metrics.go:72] Registering metrics
	I1123 10:08:43.759503       1 controller.go:711] "Syncing nftables rules"
	I1123 10:08:53.563576       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:08:53.563629       1 main.go:301] handling current node
	I1123 10:09:03.558695       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:09:03.558736       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bfee4a2cab8329a016547add1d5e648b6b6fbdbffbb23ddab59fd029390b4cce] <==
	I1123 10:08:23.407688       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1123 10:08:23.408310       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 10:08:23.410047       1 aggregator.go:166] initial CRD sync complete...
	I1123 10:08:23.410076       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 10:08:23.410084       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:08:23.410091       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:08:23.413475       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1123 10:08:23.413500       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1123 10:08:23.415190       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 10:08:23.434129       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:08:24.113340       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:08:24.118284       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:08:24.118530       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:08:24.778535       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:08:24.851310       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:08:25.039363       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:08:25.053302       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 10:08:25.054515       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 10:08:25.060657       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:08:25.344904       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 10:08:26.690292       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 10:08:26.719168       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:08:26.752005       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 10:08:38.568252       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1123 10:08:38.965857       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [13afdc5449ace2348d71f739fd2d008936276bb385a4c8ca37e93c0d66eb0ee2] <==
	I1123 10:08:38.354023       1 shared_informer.go:318] Caches are synced for service account
	I1123 10:08:38.358473       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1123 10:08:38.386704       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1123 10:08:38.412135       1 shared_informer.go:318] Caches are synced for endpoint
	I1123 10:08:38.589547       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6l8w5"
	I1123 10:08:38.593942       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-s9rqv"
	I1123 10:08:38.767948       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 10:08:38.779256       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 10:08:38.779286       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 10:08:38.975989       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1123 10:08:39.217703       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-mjkt5"
	I1123 10:08:39.229183       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-h6b8n"
	I1123 10:08:39.242902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="268.008015ms"
	I1123 10:08:39.267721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.766605ms"
	I1123 10:08:39.267808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.845µs"
	I1123 10:08:41.030023       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 10:08:41.069087       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-mjkt5"
	I1123 10:08:41.089333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.908713ms"
	I1123 10:08:41.102192       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.799926ms"
	I1123 10:08:41.102476       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.842µs"
	I1123 10:08:54.078259       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.101907ms"
	I1123 10:08:54.128898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="200.192µs"
	I1123 10:08:55.406708       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.533847ms"
	I1123 10:08:55.408349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.475µs"
	I1123 10:08:58.241868       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [944db13d5180618016f0b96a0e5e9fd6e72855ed0e64a5cd3cc27c3e17a0af76] <==
	I1123 10:08:40.115268       1 server_others.go:69] "Using iptables proxy"
	I1123 10:08:40.144357       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1123 10:08:40.193345       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:08:40.198601       1 server_others.go:152] "Using iptables Proxier"
	I1123 10:08:40.198646       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 10:08:40.198654       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 10:08:40.198690       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 10:08:40.198902       1 server.go:846] "Version info" version="v1.28.0"
	I1123 10:08:40.198919       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:08:40.200271       1 config.go:188] "Starting service config controller"
	I1123 10:08:40.200284       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 10:08:40.200300       1 config.go:97] "Starting endpoint slice config controller"
	I1123 10:08:40.200304       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 10:08:40.200691       1 config.go:315] "Starting node config controller"
	I1123 10:08:40.200698       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 10:08:40.301183       1 shared_informer.go:318] Caches are synced for service config
	I1123 10:08:40.301233       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 10:08:40.301549       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [c5123a17625b58f9982633f02af5587191ffc95e7bd4ee9db6d34344d20315c2] <==
	W1123 10:08:23.401643       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1123 10:08:23.401678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1123 10:08:23.401731       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 10:08:23.401767       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 10:08:23.401798       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1123 10:08:23.401841       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1123 10:08:23.401882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1123 10:08:23.401900       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1123 10:08:23.401811       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 10:08:23.401956       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1123 10:08:23.401941       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1123 10:08:23.402014       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1123 10:08:23.402032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1123 10:08:23.402021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1123 10:08:23.402082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1123 10:08:23.402096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1123 10:08:23.402277       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 10:08:23.402338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1123 10:08:24.209984       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1123 10:08:24.210023       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1123 10:08:24.308201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1123 10:08:24.308250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1123 10:08:24.541694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 10:08:24.541814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1123 10:08:24.991735       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 10:08:38 old-k8s-version-706028 kubelet[1362]: I1123 10:08:38.710635    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8gnv\" (UniqueName: \"kubernetes.io/projected/2aea0615-8684-4805-8c5d-f37fb042cc30-kube-api-access-m8gnv\") pod \"kube-proxy-s9rqv\" (UID: \"2aea0615-8684-4805-8c5d-f37fb042cc30\") " pod="kube-system/kube-proxy-s9rqv"
	Nov 23 10:08:38 old-k8s-version-706028 kubelet[1362]: I1123 10:08:38.710714    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2aea0615-8684-4805-8c5d-f37fb042cc30-xtables-lock\") pod \"kube-proxy-s9rqv\" (UID: \"2aea0615-8684-4805-8c5d-f37fb042cc30\") " pod="kube-system/kube-proxy-s9rqv"
	Nov 23 10:08:38 old-k8s-version-706028 kubelet[1362]: I1123 10:08:38.710747    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3045e3bc-b846-45c6-a4ff-39e877bbf8ef-cni-cfg\") pod \"kindnet-6l8w5\" (UID: \"3045e3bc-b846-45c6-a4ff-39e877bbf8ef\") " pod="kube-system/kindnet-6l8w5"
	Nov 23 10:08:38 old-k8s-version-706028 kubelet[1362]: I1123 10:08:38.710797    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3045e3bc-b846-45c6-a4ff-39e877bbf8ef-lib-modules\") pod \"kindnet-6l8w5\" (UID: \"3045e3bc-b846-45c6-a4ff-39e877bbf8ef\") " pod="kube-system/kindnet-6l8w5"
	Nov 23 10:08:38 old-k8s-version-706028 kubelet[1362]: I1123 10:08:38.710886    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3045e3bc-b846-45c6-a4ff-39e877bbf8ef-xtables-lock\") pod \"kindnet-6l8w5\" (UID: \"3045e3bc-b846-45c6-a4ff-39e877bbf8ef\") " pod="kube-system/kindnet-6l8w5"
	Nov 23 10:08:40 old-k8s-version-706028 kubelet[1362]: I1123 10:08:40.308519    1362 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-s9rqv" podStartSLOduration=2.307555298 podCreationTimestamp="2025-11-23 10:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:08:40.307157712 +0000 UTC m=+13.693123870" watchObservedRunningTime="2025-11-23 10:08:40.307555298 +0000 UTC m=+13.693521457"
	Nov 23 10:08:47 old-k8s-version-706028 kubelet[1362]: I1123 10:08:47.178493    1362 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-6l8w5" podStartSLOduration=5.776341127 podCreationTimestamp="2025-11-23 10:08:38 +0000 UTC" firstStartedPulling="2025-11-23 10:08:39.815758565 +0000 UTC m=+13.201724732" lastFinishedPulling="2025-11-23 10:08:43.217852733 +0000 UTC m=+16.603818892" observedRunningTime="2025-11-23 10:08:43.336066694 +0000 UTC m=+16.722032853" watchObservedRunningTime="2025-11-23 10:08:47.178435287 +0000 UTC m=+20.564401471"
	Nov 23 10:08:54 old-k8s-version-706028 kubelet[1362]: I1123 10:08:54.019087    1362 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 10:08:54 old-k8s-version-706028 kubelet[1362]: I1123 10:08:54.063486    1362 topology_manager.go:215] "Topology Admit Handler" podUID="11c29962-a28a-4015-9014-96acb48fefc1" podNamespace="kube-system" podName="coredns-5dd5756b68-h6b8n"
	Nov 23 10:08:54 old-k8s-version-706028 kubelet[1362]: I1123 10:08:54.077274    1362 topology_manager.go:215] "Topology Admit Handler" podUID="4bc52b3c-0d21-412d-bf6b-74f8dab91ac1" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 10:08:54 old-k8s-version-706028 kubelet[1362]: I1123 10:08:54.142278    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9hp5\" (UniqueName: \"kubernetes.io/projected/4bc52b3c-0d21-412d-bf6b-74f8dab91ac1-kube-api-access-z9hp5\") pod \"storage-provisioner\" (UID: \"4bc52b3c-0d21-412d-bf6b-74f8dab91ac1\") " pod="kube-system/storage-provisioner"
	Nov 23 10:08:54 old-k8s-version-706028 kubelet[1362]: I1123 10:08:54.142348    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5bz2\" (UniqueName: \"kubernetes.io/projected/11c29962-a28a-4015-9014-96acb48fefc1-kube-api-access-c5bz2\") pod \"coredns-5dd5756b68-h6b8n\" (UID: \"11c29962-a28a-4015-9014-96acb48fefc1\") " pod="kube-system/coredns-5dd5756b68-h6b8n"
	Nov 23 10:08:54 old-k8s-version-706028 kubelet[1362]: I1123 10:08:54.142384    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11c29962-a28a-4015-9014-96acb48fefc1-config-volume\") pod \"coredns-5dd5756b68-h6b8n\" (UID: \"11c29962-a28a-4015-9014-96acb48fefc1\") " pod="kube-system/coredns-5dd5756b68-h6b8n"
	Nov 23 10:08:54 old-k8s-version-706028 kubelet[1362]: I1123 10:08:54.142418    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4bc52b3c-0d21-412d-bf6b-74f8dab91ac1-tmp\") pod \"storage-provisioner\" (UID: \"4bc52b3c-0d21-412d-bf6b-74f8dab91ac1\") " pod="kube-system/storage-provisioner"
	Nov 23 10:08:54 old-k8s-version-706028 kubelet[1362]: W1123 10:08:54.445232    1362 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/crio-194ce1c4a0963bbf62d53a6fc94de0e2d98acf9c2ebe8644842902db85cc95ce WatchSource:0}: Error finding container 194ce1c4a0963bbf62d53a6fc94de0e2d98acf9c2ebe8644842902db85cc95ce: Status 404 returned error can't find the container with id 194ce1c4a0963bbf62d53a6fc94de0e2d98acf9c2ebe8644842902db85cc95ce
	Nov 23 10:08:55 old-k8s-version-706028 kubelet[1362]: I1123 10:08:55.383308    1362 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-h6b8n" podStartSLOduration=16.383252894 podCreationTimestamp="2025-11-23 10:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:08:55.382789058 +0000 UTC m=+28.768755233" watchObservedRunningTime="2025-11-23 10:08:55.383252894 +0000 UTC m=+28.769219053"
	Nov 23 10:08:55 old-k8s-version-706028 kubelet[1362]: I1123 10:08:55.383880    1362 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.383840244 podCreationTimestamp="2025-11-23 10:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:08:55.361095897 +0000 UTC m=+28.747062056" watchObservedRunningTime="2025-11-23 10:08:55.383840244 +0000 UTC m=+28.769806796"
	Nov 23 10:08:57 old-k8s-version-706028 kubelet[1362]: I1123 10:08:57.657487    1362 topology_manager.go:215] "Topology Admit Handler" podUID="3d8762ee-c527-4c0e-9d25-4aa79457ae6b" podNamespace="default" podName="busybox"
	Nov 23 10:08:57 old-k8s-version-706028 kubelet[1362]: W1123 10:08:57.664059    1362 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-706028" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-706028' and this object
	Nov 23 10:08:57 old-k8s-version-706028 kubelet[1362]: E1123 10:08:57.664236    1362 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-706028" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-706028' and this object
	Nov 23 10:08:57 old-k8s-version-706028 kubelet[1362]: I1123 10:08:57.778795    1362 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjw68\" (UniqueName: \"kubernetes.io/projected/3d8762ee-c527-4c0e-9d25-4aa79457ae6b-kube-api-access-vjw68\") pod \"busybox\" (UID: \"3d8762ee-c527-4c0e-9d25-4aa79457ae6b\") " pod="default/busybox"
	Nov 23 10:08:58 old-k8s-version-706028 kubelet[1362]: E1123 10:08:58.892450    1362 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 23 10:08:58 old-k8s-version-706028 kubelet[1362]: E1123 10:08:58.892517    1362 projected.go:198] Error preparing data for projected volume kube-api-access-vjw68 for pod default/busybox: failed to sync configmap cache: timed out waiting for the condition
	Nov 23 10:08:58 old-k8s-version-706028 kubelet[1362]: E1123 10:08:58.892927    1362 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d8762ee-c527-4c0e-9d25-4aa79457ae6b-kube-api-access-vjw68 podName:3d8762ee-c527-4c0e-9d25-4aa79457ae6b nodeName:}" failed. No retries permitted until 2025-11-23 10:08:59.39259512 +0000 UTC m=+32.778561278 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vjw68" (UniqueName: "kubernetes.io/projected/3d8762ee-c527-4c0e-9d25-4aa79457ae6b-kube-api-access-vjw68") pod "busybox" (UID: "3d8762ee-c527-4c0e-9d25-4aa79457ae6b") : failed to sync configmap cache: timed out waiting for the condition
	Nov 23 10:08:59 old-k8s-version-706028 kubelet[1362]: W1123 10:08:59.791685    1362 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/crio-4da1830bbcc9fd9c92ea8e9be392bbdd6e9b8f444b29e2f935158282a7c80249 WatchSource:0}: Error finding container 4da1830bbcc9fd9c92ea8e9be392bbdd6e9b8f444b29e2f935158282a7c80249: Status 404 returned error can't find the container with id 4da1830bbcc9fd9c92ea8e9be392bbdd6e9b8f444b29e2f935158282a7c80249
	
	
	==> storage-provisioner [f3373b10e199148a7e153af13968aee7d5649ee93979f4251998c5101a4585da] <==
	I1123 10:08:54.468607       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:08:54.511039       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:08:54.511181       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 10:08:54.521520       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:08:54.523755       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6338f536-3941-4183-9bc9-75c073ed286e", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-706028_603c8f71-c623-42c1-b568-63a3f84f11ef became leader
	I1123 10:08:54.534129       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-706028_603c8f71-c623-42c1-b568-63a3f84f11ef!
	I1123 10:08:54.634952       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-706028_603c8f71-c623-42c1-b568-63a3f84f11ef!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-706028 -n old-k8s-version-706028
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-706028 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.35s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.68s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-020224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-020224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (323.240695ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:10:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-020224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-020224 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-020224 describe deploy/metrics-server -n kube-system: exit status 1 (138.379059ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-020224 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-020224
helpers_test.go:243: (dbg) docker inspect no-preload-020224:

-- stdout --
	[
	    {
	        "Id": "18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0",
	        "Created": "2025-11-23T10:09:02.634228682Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 507358,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:09:02.740080174Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/hostname",
	        "HostsPath": "/var/lib/docker/containers/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/hosts",
	        "LogPath": "/var/lib/docker/containers/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0-json.log",
	        "Name": "/no-preload-020224",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-020224:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-020224",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0",
	                "LowerDir": "/var/lib/docker/overlay2/fa5d3a25bcb7f58c03a8da4f93eb6974e9507a851f3a34e8ca39457b619a17bf-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa5d3a25bcb7f58c03a8da4f93eb6974e9507a851f3a34e8ca39457b619a17bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa5d3a25bcb7f58c03a8da4f93eb6974e9507a851f3a34e8ca39457b619a17bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa5d3a25bcb7f58c03a8da4f93eb6974e9507a851f3a34e8ca39457b619a17bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-020224",
	                "Source": "/var/lib/docker/volumes/no-preload-020224/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-020224",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-020224",
	                "name.minikube.sigs.k8s.io": "no-preload-020224",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4a9565ecfda13fd14b28b4f6555166ba56fd6dc48e7a7a3654cf4e7663aea398",
	            "SandboxKey": "/var/run/docker/netns/4a9565ecfda1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-020224": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:79:e3:75:33:aa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5bdf554cce75de475d0aa700ed33b59629266aa02ea95fbb3579c79c5e0148ad",
	                    "EndpointID": "4000b55606e976c476a6850f6c49f355933cc13c4bffc3e4df0b2b30cd23a437",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-020224",
	                        "18d5b0a18428"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
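Note (reference only, not part of the captured test output): the port mappings and network details in the inspect dump above can be pulled out directly with docker's Go-template formatting; the container/profile name no-preload-020224 is the one shown in the output above.

	# print the host port bindings for the container
	docker container inspect no-preload-020224 --format '{{json .NetworkSettings.Ports}}'
	# print the container's IP on the profile network
	docker container inspect no-preload-020224 --format '{{(index .NetworkSettings.Networks "no-preload-020224").IPAddress}}'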
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-020224 -n no-preload-020224
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-020224 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-020224 logs -n 25: (1.190889674s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p calico-507563 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                         │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo docker system info                                                                                                                                                                                                      │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cri-dockerd --version                                                                                                                                                                                                   │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo containerd config dump                                                                                                                                                                                                  │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo crio config                                                                                                                                                                                                             │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ delete  │ -p calico-507563                                                                                                                                                                                                                              │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-706028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │                     │
	│ stop    │ -p old-k8s-version-706028 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-706028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p old-k8s-version-706028 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-020224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:09:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:09:25.816605  510025 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:09:25.816804  510025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:09:25.816840  510025 out.go:374] Setting ErrFile to fd 2...
	I1123 10:09:25.816861  510025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:09:25.817185  510025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:09:25.818560  510025 out.go:368] Setting JSON to false
	I1123 10:09:25.819894  510025 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10315,"bootTime":1763882251,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:09:25.819991  510025 start.go:143] virtualization:  
	I1123 10:09:25.825487  510025 out.go:179] * [old-k8s-version-706028] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:09:25.828661  510025 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 10:09:25.828713  510025 notify.go:221] Checking for updates...
	I1123 10:09:25.832718  510025 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:09:25.836124  510025 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:09:25.839233  510025 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 10:09:25.842224  510025 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:09:25.845044  510025 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:09:25.848550  510025 config.go:182] Loaded profile config "old-k8s-version-706028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 10:09:25.852238  510025 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 10:09:25.855323  510025 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:09:25.903237  510025 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:09:25.903350  510025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:09:26.147979  510025 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-23 10:09:26.130627918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:09:26.148097  510025 docker.go:319] overlay module found
	I1123 10:09:26.151260  510025 out.go:179] * Using the docker driver based on existing profile
	I1123 10:09:23.838253  507023 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.596998991s)
	I1123 10:09:23.838283  507023 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1123 10:09:23.838302  507023 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1123 10:09:23.838375  507023 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1123 10:09:24.620329  507023 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1123 10:09:24.620360  507023 cache_images.go:125] Successfully loaded all cached images
	I1123 10:09:24.620366  507023 cache_images.go:94] duration metric: took 14.298706809s to LoadCachedImages
	I1123 10:09:24.620378  507023 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1123 10:09:24.620469  507023 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-020224 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-020224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:09:24.620550  507023 ssh_runner.go:195] Run: crio config
	I1123 10:09:24.691576  507023 cni.go:84] Creating CNI manager for ""
	I1123 10:09:24.691650  507023 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:09:24.691684  507023 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:09:24.691736  507023 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-020224 NodeName:no-preload-020224 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:09:24.691911  507023 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-020224"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:09:24.692015  507023 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:09:24.700497  507023 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1123 10:09:24.700576  507023 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1123 10:09:24.708887  507023 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1123 10:09:24.708901  507023 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1123 10:09:24.708930  507023 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1123 10:09:24.709239  507023 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1123 10:09:24.714192  507023 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1123 10:09:24.714228  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1123 10:09:25.758823  507023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:09:25.782186  507023 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1123 10:09:25.786247  507023 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1123 10:09:25.786286  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1123 10:09:25.793611  507023 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1123 10:09:25.815702  507023 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1123 10:09:25.815733  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1123 10:09:26.154246  510025 start.go:309] selected driver: docker
	I1123 10:09:26.154265  510025 start.go:927] validating driver "docker" against &{Name:old-k8s-version-706028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706028 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:09:26.154366  510025 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:09:26.155353  510025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:09:26.350909  510025 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-23 10:09:26.339020749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:09:26.351249  510025 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:09:26.351268  510025 cni.go:84] Creating CNI manager for ""
	I1123 10:09:26.351323  510025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:09:26.351355  510025 start.go:353] cluster config:
	{Name:old-k8s-version-706028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:09:26.354737  510025 out.go:179] * Starting "old-k8s-version-706028" primary control-plane node in "old-k8s-version-706028" cluster
	I1123 10:09:26.357772  510025 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:09:26.360939  510025 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:09:26.363845  510025 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 10:09:26.363890  510025 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1123 10:09:26.363905  510025 cache.go:65] Caching tarball of preloaded images
	I1123 10:09:26.364010  510025 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:09:26.364291  510025 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 10:09:26.364306  510025 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1123 10:09:26.364414  510025 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/config.json ...
	I1123 10:09:26.442931  510025 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:09:26.442958  510025 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:09:26.442973  510025 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:09:26.443005  510025 start.go:360] acquireMachinesLock for old-k8s-version-706028: {Name:mkc18f399d53c3cb3fccf9a7a08ad7a013834dfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:09:26.443075  510025 start.go:364] duration metric: took 41.864µs to acquireMachinesLock for "old-k8s-version-706028"
	I1123 10:09:26.443100  510025 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:09:26.443105  510025 fix.go:54] fixHost starting: 
	I1123 10:09:26.443366  510025 cli_runner.go:164] Run: docker container inspect old-k8s-version-706028 --format={{.State.Status}}
	I1123 10:09:26.499045  510025 fix.go:112] recreateIfNeeded on old-k8s-version-706028: state=Stopped err=<nil>
	W1123 10:09:26.499074  510025 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 10:09:26.616084  507023 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:09:26.626854  507023 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:09:26.651428  507023 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:09:26.669586  507023 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1123 10:09:26.699724  507023 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:09:26.703588  507023 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:09:26.713718  507023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:09:26.886815  507023 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:09:26.905260  507023 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224 for IP: 192.168.85.2
	I1123 10:09:26.905278  507023 certs.go:195] generating shared ca certs ...
	I1123 10:09:26.905297  507023 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:26.905445  507023 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:09:26.905495  507023 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:09:26.905504  507023 certs.go:257] generating profile certs ...
	I1123 10:09:26.905556  507023 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/client.key
	I1123 10:09:26.905566  507023 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/client.crt with IP's: []
	I1123 10:09:27.397684  507023 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/client.crt ...
	I1123 10:09:27.397758  507023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/client.crt: {Name:mka9c1ced24aa3b11a897581db54eee96552e175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:27.398010  507023 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/client.key ...
	I1123 10:09:27.398051  507023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/client.key: {Name:mk59968ca778aae4afdab8270d7f3819ccf3d5c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:27.399875  507023 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.key.d87566b3
	I1123 10:09:27.399951  507023 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.crt.d87566b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 10:09:27.583852  507023 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.crt.d87566b3 ...
	I1123 10:09:27.585952  507023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.crt.d87566b3: {Name:mk236b3518a6eed5134f9b2df5f74ef82cc2c700 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:27.588093  507023 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.key.d87566b3 ...
	I1123 10:09:27.588159  507023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.key.d87566b3: {Name:mkcf10d547d84f16a6e995b1f68dd90878114d77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:27.588360  507023 certs.go:382] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.crt.d87566b3 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.crt
	I1123 10:09:27.588477  507023 certs.go:386] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.key.d87566b3 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.key
	I1123 10:09:27.588580  507023 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.key
	I1123 10:09:27.588626  507023 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.crt with IP's: []
	I1123 10:09:27.983398  507023 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.crt ...
	I1123 10:09:27.983472  507023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.crt: {Name:mka4b2bc3a3f34803c036958ba4ccf37c25d1d49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:27.983676  507023 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.key ...
	I1123 10:09:27.983718  507023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.key: {Name:mkb311f5d3f3360de8949fed7bef66d4cce7e547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:27.983949  507023 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:09:27.984017  507023 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:09:27.984044  507023 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:09:27.984095  507023 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:09:27.984142  507023 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:09:27.984187  507023 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:09:27.984259  507023 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:09:27.984839  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:09:28.007669  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:09:28.031070  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:09:28.053034  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:09:28.086948  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:09:28.107917  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:09:28.126138  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:09:28.145910  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:09:28.164185  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:09:28.182701  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:09:28.200422  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:09:28.217200  507023 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:09:28.229856  507023 ssh_runner.go:195] Run: openssl version
	I1123 10:09:28.236225  507023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:09:28.244515  507023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:09:28.248365  507023 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:09:28.248487  507023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:09:28.289468  507023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:09:28.297972  507023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:09:28.306132  507023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:09:28.310156  507023 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:09:28.310248  507023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:09:28.350913  507023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:09:28.359461  507023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:09:28.369292  507023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:09:28.373267  507023 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:09:28.373384  507023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:09:28.414655  507023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 10:09:28.423253  507023 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:09:28.427087  507023 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:09:28.427142  507023 kubeadm.go:401] StartCluster: {Name:no-preload-020224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:09:28.427217  507023 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:09:28.427279  507023 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:09:28.453943  507023 cri.go:89] found id: ""
	I1123 10:09:28.454025  507023 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:09:28.462251  507023 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:09:28.475099  507023 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:09:28.475165  507023 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:09:28.483145  507023 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:09:28.483170  507023 kubeadm.go:158] found existing configuration files:
	
	I1123 10:09:28.483229  507023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:09:28.490837  507023 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:09:28.490951  507023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:09:28.498450  507023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:09:28.506274  507023 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:09:28.506398  507023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:09:28.514175  507023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:09:28.522316  507023 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:09:28.522432  507023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:09:28.529966  507023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:09:28.538073  507023 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:09:28.538172  507023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:09:28.545850  507023 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:09:28.584444  507023 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:09:28.584672  507023 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:09:28.614264  507023 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:09:28.614343  507023 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 10:09:28.614387  507023 kubeadm.go:319] OS: Linux
	I1123 10:09:28.614434  507023 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:09:28.614487  507023 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 10:09:28.614538  507023 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:09:28.614590  507023 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:09:28.614642  507023 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:09:28.614695  507023 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:09:28.614744  507023 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:09:28.614795  507023 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:09:28.614845  507023 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 10:09:28.693040  507023 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:09:28.693231  507023 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:09:28.693375  507023 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:09:28.708070  507023 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:09:26.502788  510025 out.go:252] * Restarting existing docker container for "old-k8s-version-706028" ...
	I1123 10:09:26.502927  510025 cli_runner.go:164] Run: docker start old-k8s-version-706028
	I1123 10:09:26.842666  510025 cli_runner.go:164] Run: docker container inspect old-k8s-version-706028 --format={{.State.Status}}
	I1123 10:09:26.869002  510025 kic.go:430] container "old-k8s-version-706028" state is running.
	I1123 10:09:26.871983  510025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-706028
	I1123 10:09:26.905549  510025 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/config.json ...
	I1123 10:09:26.905768  510025 machine.go:94] provisionDockerMachine start ...
	I1123 10:09:26.905823  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:26.966070  510025 main.go:143] libmachine: Using SSH client type: native
	I1123 10:09:26.966431  510025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1123 10:09:26.966439  510025 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:09:26.967504  510025 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 10:09:30.145489  510025 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-706028
	
	I1123 10:09:30.145567  510025 ubuntu.go:182] provisioning hostname "old-k8s-version-706028"
	I1123 10:09:30.145672  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:30.175168  510025 main.go:143] libmachine: Using SSH client type: native
	I1123 10:09:30.175494  510025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1123 10:09:30.175513  510025 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-706028 && echo "old-k8s-version-706028" | sudo tee /etc/hostname
	I1123 10:09:30.348439  510025 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-706028
	
	I1123 10:09:30.348567  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:30.371755  510025 main.go:143] libmachine: Using SSH client type: native
	I1123 10:09:30.372077  510025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1123 10:09:30.372101  510025 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-706028' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-706028/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-706028' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:09:30.529699  510025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:09:30.529765  510025 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 10:09:30.529803  510025 ubuntu.go:190] setting up certificates
	I1123 10:09:30.529844  510025 provision.go:84] configureAuth start
	I1123 10:09:30.529921  510025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-706028
	I1123 10:09:30.551738  510025 provision.go:143] copyHostCerts
	I1123 10:09:30.551808  510025 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 10:09:30.551816  510025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 10:09:30.551891  510025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 10:09:30.551987  510025 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 10:09:30.551992  510025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 10:09:30.552017  510025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 10:09:30.552069  510025 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 10:09:30.552073  510025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 10:09:30.552097  510025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 10:09:30.552141  510025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-706028 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-706028]
	I1123 10:09:30.761442  510025 provision.go:177] copyRemoteCerts
	I1123 10:09:30.761561  510025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:09:30.761630  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:30.778968  510025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
	I1123 10:09:28.714115  507023 out.go:252]   - Generating certificates and keys ...
	I1123 10:09:28.714230  507023 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:09:28.714309  507023 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:09:29.080677  507023 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:09:29.157548  507023 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:09:29.332005  507023 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:09:30.090914  507023 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:09:30.894112  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 10:09:30.927541  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:09:30.948670  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:09:30.968656  510025 provision.go:87] duration metric: took 438.773277ms to configureAuth
	I1123 10:09:30.968684  510025 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:09:30.968872  510025 config.go:182] Loaded profile config "old-k8s-version-706028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 10:09:30.968979  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:30.987669  510025 main.go:143] libmachine: Using SSH client type: native
	I1123 10:09:30.987985  510025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1123 10:09:30.988005  510025 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:09:31.389807  510025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:09:31.389827  510025 machine.go:97] duration metric: took 4.484048435s to provisionDockerMachine
	I1123 10:09:31.389838  510025 start.go:293] postStartSetup for "old-k8s-version-706028" (driver="docker")
	I1123 10:09:31.389861  510025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:09:31.389921  510025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:09:31.389969  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:31.422998  510025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
	I1123 10:09:31.533527  510025 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:09:31.537904  510025 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:09:31.537928  510025 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:09:31.537939  510025 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 10:09:31.537997  510025 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 10:09:31.538071  510025 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 10:09:31.538167  510025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:09:31.547212  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:09:31.566787  510025 start.go:296] duration metric: took 176.934739ms for postStartSetup
	I1123 10:09:31.566876  510025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:09:31.566914  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:31.586038  510025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
	I1123 10:09:31.695308  510025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:09:31.700925  510025 fix.go:56] duration metric: took 5.257813295s for fixHost
	I1123 10:09:31.700951  510025 start.go:83] releasing machines lock for "old-k8s-version-706028", held for 5.25786347s
	I1123 10:09:31.701025  510025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-706028
	I1123 10:09:31.717675  510025 ssh_runner.go:195] Run: cat /version.json
	I1123 10:09:31.717731  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:31.717982  510025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:09:31.718048  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:31.750832  510025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
	I1123 10:09:31.759857  510025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
	I1123 10:09:31.972320  510025 ssh_runner.go:195] Run: systemctl --version
	I1123 10:09:31.979274  510025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:09:32.024162  510025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:09:32.029896  510025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:09:32.029986  510025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:09:32.039008  510025 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:09:32.039052  510025 start.go:496] detecting cgroup driver to use...
	I1123 10:09:32.039088  510025 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:09:32.039162  510025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:09:32.055896  510025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:09:32.070659  510025 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:09:32.070741  510025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:09:32.087808  510025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:09:32.102639  510025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:09:32.249883  510025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:09:32.420466  510025 docker.go:234] disabling docker service ...
	I1123 10:09:32.420539  510025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:09:32.436232  510025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:09:32.452606  510025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:09:32.590344  510025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:09:32.741257  510025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:09:32.755631  510025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:09:32.770001  510025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1123 10:09:32.770115  510025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:09:32.778798  510025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:09:32.778903  510025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:09:32.787709  510025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:09:32.796376  510025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:09:32.805076  510025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:09:32.812834  510025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:09:32.821375  510025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:09:32.829591  510025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:09:32.838369  510025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:09:32.846236  510025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:09:32.853580  510025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:09:32.996304  510025 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:09:33.201736  510025 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:09:33.201845  510025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:09:33.210124  510025 start.go:564] Will wait 60s for crictl version
	I1123 10:09:33.210242  510025 ssh_runner.go:195] Run: which crictl
	I1123 10:09:33.214251  510025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:09:33.259548  510025 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:09:33.259706  510025 ssh_runner.go:195] Run: crio --version
	I1123 10:09:33.292540  510025 ssh_runner.go:195] Run: crio --version
	I1123 10:09:33.335336  510025 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1123 10:09:33.337796  510025 cli_runner.go:164] Run: docker network inspect old-k8s-version-706028 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:09:33.363399  510025 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:09:33.367380  510025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:09:33.377221  510025 kubeadm.go:884] updating cluster {Name:old-k8s-version-706028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:09:33.377328  510025 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 10:09:33.377380  510025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:09:33.433278  510025 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:09:33.433297  510025 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:09:33.433350  510025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:09:33.484055  510025 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:09:33.484129  510025 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:09:33.484151  510025 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1123 10:09:33.484289  510025 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-706028 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:09:33.484411  510025 ssh_runner.go:195] Run: crio config
	I1123 10:09:33.574456  510025 cni.go:84] Creating CNI manager for ""
	I1123 10:09:33.574527  510025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:09:33.574562  510025 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:09:33.574614  510025 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-706028 NodeName:old-k8s-version-706028 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:09:33.574801  510025 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-706028"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:09:33.574916  510025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 10:09:33.583770  510025 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:09:33.583888  510025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:09:33.592261  510025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1123 10:09:33.612594  510025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:09:33.633150  510025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1123 10:09:33.652223  510025 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:09:33.656430  510025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:09:33.666775  510025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:09:33.796040  510025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:09:33.810658  510025 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028 for IP: 192.168.76.2
	I1123 10:09:33.810720  510025 certs.go:195] generating shared ca certs ...
	I1123 10:09:33.810758  510025 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:33.812518  510025 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:09:33.812630  510025 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:09:33.812669  510025 certs.go:257] generating profile certs ...
	I1123 10:09:33.812819  510025 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/client.key
	I1123 10:09:33.812924  510025 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/apiserver.key.494e02ad
	I1123 10:09:33.813028  510025 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/proxy-client.key
	I1123 10:09:33.813198  510025 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:09:33.813266  510025 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:09:33.813291  510025 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:09:33.813348  510025 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:09:33.813437  510025 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:09:33.813508  510025 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:09:33.813598  510025 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:09:33.814304  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:09:33.846039  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:09:33.886031  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:09:33.929221  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:09:33.983668  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 10:09:34.031411  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:09:34.094565  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:09:34.138846  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:09:34.166231  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:09:34.185743  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:09:34.204334  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:09:34.222295  510025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:09:34.235383  510025 ssh_runner.go:195] Run: openssl version
	I1123 10:09:34.242256  510025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:09:34.250714  510025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:09:34.254462  510025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:09:34.254565  510025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:09:34.301841  510025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:09:34.310849  510025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:09:34.319474  510025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:09:34.323372  510025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:09:34.323484  510025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:09:34.367045  510025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 10:09:34.375453  510025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:09:34.384024  510025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:09:34.387853  510025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:09:34.387963  510025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:09:34.431172  510025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:09:34.439459  510025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:09:34.443711  510025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:09:34.490008  510025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:09:34.533374  510025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:09:34.590145  510025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:09:34.702355  510025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:09:34.785341  510025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 10:09:34.845048  510025 kubeadm.go:401] StartCluster: {Name:old-k8s-version-706028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:09:34.845150  510025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:09:34.845214  510025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:09:35.032137  510025 cri.go:89] found id: "34ee70a0be166e12e57fb579eaa0cb22b8873a626bdc6ae8d83d81bfcbff7280"
	I1123 10:09:35.032160  510025 cri.go:89] found id: "98f50d387d5b2fded7f07e260ceb83bce5a609dc2bd07303f78f93578f6d82ed"
	I1123 10:09:35.032166  510025 cri.go:89] found id: "676b2dbee75eee912c3a604195863ba16974dcbd9b686ff17513a405a42b3e91"
	I1123 10:09:35.032175  510025 cri.go:89] found id: "ea67be45b14c0ca0ac41632b23ebd8095b8b2a16235fddfd8d5a4b1519577720"
	I1123 10:09:35.032179  510025 cri.go:89] found id: ""
	I1123 10:09:35.032229  510025 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:09:35.091309  510025 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:09:35Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:09:35.091403  510025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:09:35.125779  510025 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:09:35.125800  510025 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:09:35.125866  510025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:09:35.161965  510025 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:09:35.162380  510025 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-706028" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:09:35.162489  510025 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-282998/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-706028" cluster setting kubeconfig missing "old-k8s-version-706028" context setting]
	I1123 10:09:35.162844  510025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:35.164106  510025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:09:35.205918  510025 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:09:35.205955  510025 kubeadm.go:602] duration metric: took 80.147689ms to restartPrimaryControlPlane
	I1123 10:09:35.205965  510025 kubeadm.go:403] duration metric: took 360.928776ms to StartCluster
	I1123 10:09:35.205982  510025 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:35.206051  510025 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:09:35.206650  510025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:35.206865  510025 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:09:35.207183  510025 config.go:182] Loaded profile config "old-k8s-version-706028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 10:09:35.207233  510025 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:09:35.207369  510025 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-706028"
	I1123 10:09:35.207390  510025 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-706028"
	W1123 10:09:35.207406  510025 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:09:35.207428  510025 host.go:66] Checking if "old-k8s-version-706028" exists ...
	I1123 10:09:35.207937  510025 cli_runner.go:164] Run: docker container inspect old-k8s-version-706028 --format={{.State.Status}}
	I1123 10:09:35.208296  510025 addons.go:70] Setting dashboard=true in profile "old-k8s-version-706028"
	I1123 10:09:35.208332  510025 addons.go:239] Setting addon dashboard=true in "old-k8s-version-706028"
	W1123 10:09:35.208342  510025 addons.go:248] addon dashboard should already be in state true
	I1123 10:09:35.208366  510025 host.go:66] Checking if "old-k8s-version-706028" exists ...
	I1123 10:09:35.208602  510025 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-706028"
	I1123 10:09:35.208619  510025 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-706028"
	I1123 10:09:35.208864  510025 cli_runner.go:164] Run: docker container inspect old-k8s-version-706028 --format={{.State.Status}}
	I1123 10:09:35.209320  510025 cli_runner.go:164] Run: docker container inspect old-k8s-version-706028 --format={{.State.Status}}
	I1123 10:09:35.212760  510025 out.go:179] * Verifying Kubernetes components...
	I1123 10:09:35.220601  510025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:09:35.253078  510025 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:09:35.256625  510025 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:09:35.264273  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:09:35.264307  510025 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:09:35.264395  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:35.275400  510025 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:09:35.276507  510025 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-706028"
	W1123 10:09:35.276525  510025 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:09:35.276550  510025 host.go:66] Checking if "old-k8s-version-706028" exists ...
	I1123 10:09:35.276978  510025 cli_runner.go:164] Run: docker container inspect old-k8s-version-706028 --format={{.State.Status}}
	I1123 10:09:35.279173  510025 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:09:35.279198  510025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:09:35.279264  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:35.320480  510025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
	I1123 10:09:35.338505  510025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
	I1123 10:09:35.341705  510025 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:09:35.341724  510025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:09:35.341785  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:35.370465  510025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
	I1123 10:09:35.716966  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:09:35.717046  510025 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:09:35.729966  510025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:09:35.746727  510025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:09:32.026756  507023 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:09:32.027354  507023 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-020224] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:09:32.865744  507023 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:09:32.865886  507023 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-020224] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:09:33.737912  507023 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:09:34.527711  507023 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:09:35.657738  507023 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:09:35.659174  507023 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:09:36.865749  507023 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:09:37.179989  507023 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:09:37.713652  507023 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:09:37.857350  507023 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:09:38.059996  507023 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:09:38.061199  507023 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:09:38.072369  507023 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:09:35.843403  510025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:09:35.859066  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:09:35.859138  510025 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:09:35.886520  510025 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-706028" to be "Ready" ...
	I1123 10:09:35.977880  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:09:35.977956  510025 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:09:36.134230  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:09:36.134302  510025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:09:36.234173  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:09:36.234246  510025 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:09:36.313888  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:09:36.313967  510025 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:09:36.349608  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:09:36.349681  510025 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:09:36.380793  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:09:36.380868  510025 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:09:36.438939  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:09:36.439013  510025 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:09:36.468020  510025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:09:38.075902  507023 out.go:252]   - Booting up control plane ...
	I1123 10:09:38.076014  507023 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:09:38.076345  507023 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:09:38.077797  507023 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:09:38.095384  507023 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:09:38.095493  507023 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:09:38.105104  507023 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:09:38.105206  507023 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:09:38.105245  507023 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:09:38.317472  507023 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:09:38.317595  507023 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 10:09:39.321755  507023 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000868856s
	I1123 10:09:39.321863  507023 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:09:39.321944  507023 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1123 10:09:39.322033  507023 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:09:39.322111  507023 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:09:42.012477  510025 node_ready.go:49] node "old-k8s-version-706028" is "Ready"
	I1123 10:09:42.012508  510025 node_ready.go:38] duration metric: took 6.125883767s for node "old-k8s-version-706028" to be "Ready" ...
	I1123 10:09:42.012523  510025 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:09:42.012590  510025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:09:44.653863  510025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.907055628s)
	I1123 10:09:45.467460  510025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.623967138s)
	I1123 10:09:46.053767  510025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.58565667s)
	I1123 10:09:46.053985  510025 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.041383108s)
	I1123 10:09:46.054014  510025 api_server.go:72] duration metric: took 10.847108011s to wait for apiserver process to appear ...
	I1123 10:09:46.054022  510025 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:09:46.054039  510025 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:09:46.056860  510025 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-706028 addons enable metrics-server
	
	I1123 10:09:46.059785  510025 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1123 10:09:44.569909  507023 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.248185167s
	I1123 10:09:47.270468  507023 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.94909917s
	I1123 10:09:48.823470  507023 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.501915291s
	I1123 10:09:48.847000  507023 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:09:48.867488  507023 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:09:48.883089  507023 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:09:48.883305  507023 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-020224 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:09:48.895711  507023 kubeadm.go:319] [bootstrap-token] Using token: 8qqp89.w1nl5taaj7197tdy
	I1123 10:09:48.898487  507023 out.go:252]   - Configuring RBAC rules ...
	I1123 10:09:48.898611  507023 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:09:48.903949  507023 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:09:48.912548  507023 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:09:48.916794  507023 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:09:48.923117  507023 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:09:48.927219  507023 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:09:49.231985  507023 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:09:49.660319  507023 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:09:50.232963  507023 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:09:50.234135  507023 kubeadm.go:319] 
	I1123 10:09:50.234220  507023 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:09:50.234231  507023 kubeadm.go:319] 
	I1123 10:09:50.234307  507023 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:09:50.234320  507023 kubeadm.go:319] 
	I1123 10:09:50.234346  507023 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:09:50.234409  507023 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:09:50.234463  507023 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:09:50.234471  507023 kubeadm.go:319] 
	I1123 10:09:50.234532  507023 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:09:50.234540  507023 kubeadm.go:319] 
	I1123 10:09:50.234588  507023 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:09:50.234595  507023 kubeadm.go:319] 
	I1123 10:09:50.234647  507023 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:09:50.234725  507023 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:09:50.234803  507023 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:09:50.234811  507023 kubeadm.go:319] 
	I1123 10:09:50.234895  507023 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:09:50.234971  507023 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:09:50.234975  507023 kubeadm.go:319] 
	I1123 10:09:50.235059  507023 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8qqp89.w1nl5taaj7197tdy \
	I1123 10:09:50.235162  507023 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:887f8119ffe4d5a917d34cb24e0eb6ba3996e6bcce8cd575315ae96526a54a7e \
	I1123 10:09:50.235182  507023 kubeadm.go:319] 	--control-plane 
	I1123 10:09:50.235186  507023 kubeadm.go:319] 
	I1123 10:09:50.235270  507023 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:09:50.235276  507023 kubeadm.go:319] 
	I1123 10:09:50.235358  507023 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8qqp89.w1nl5taaj7197tdy \
	I1123 10:09:50.235461  507023 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:887f8119ffe4d5a917d34cb24e0eb6ba3996e6bcce8cd575315ae96526a54a7e 
	I1123 10:09:50.239404  507023 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 10:09:50.239637  507023 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 10:09:50.239747  507023 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:09:50.239773  507023 cni.go:84] Creating CNI manager for ""
	I1123 10:09:50.239831  507023 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:09:50.244833  507023 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 10:09:46.062661  510025 addons.go:530] duration metric: took 10.855424996s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1123 10:09:46.072442  510025 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:09:46.074321  510025 api_server.go:141] control plane version: v1.28.0
	I1123 10:09:46.074345  510025 api_server.go:131] duration metric: took 20.317301ms to wait for apiserver health ...
	I1123 10:09:46.074354  510025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:09:46.079068  510025 system_pods.go:59] 8 kube-system pods found
	I1123 10:09:46.079157  510025 system_pods.go:61] "coredns-5dd5756b68-h6b8n" [11c29962-a28a-4015-9014-96acb48fefc1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:09:46.079182  510025 system_pods.go:61] "etcd-old-k8s-version-706028" [994d2bc9-8d4e-4211-a391-67531749ae73] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:09:46.079250  510025 system_pods.go:61] "kindnet-6l8w5" [3045e3bc-b846-45c6-a4ff-39e877bbf8ef] Running
	I1123 10:09:46.079279  510025 system_pods.go:61] "kube-apiserver-old-k8s-version-706028" [5fdf9127-966c-4a06-8fd6-4c3ae574b0a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:09:46.079314  510025 system_pods.go:61] "kube-controller-manager-old-k8s-version-706028" [5c49bac9-5830-437f-bf92-5caffda221fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:09:46.079340  510025 system_pods.go:61] "kube-proxy-s9rqv" [2aea0615-8684-4805-8c5d-f37fb042cc30] Running
	I1123 10:09:46.079362  510025 system_pods.go:61] "kube-scheduler-old-k8s-version-706028" [bd09b544-f854-4b15-a1ea-124bdfb16b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:09:46.079400  510025 system_pods.go:61] "storage-provisioner" [4bc52b3c-0d21-412d-bf6b-74f8dab91ac1] Running
	I1123 10:09:46.079427  510025 system_pods.go:74] duration metric: took 5.066869ms to wait for pod list to return data ...
	I1123 10:09:46.079451  510025 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:09:46.087656  510025 default_sa.go:45] found service account: "default"
	I1123 10:09:46.087729  510025 default_sa.go:55] duration metric: took 8.240045ms for default service account to be created ...
	I1123 10:09:46.087753  510025 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:09:46.091525  510025 system_pods.go:86] 8 kube-system pods found
	I1123 10:09:46.091609  510025 system_pods.go:89] "coredns-5dd5756b68-h6b8n" [11c29962-a28a-4015-9014-96acb48fefc1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:09:46.091635  510025 system_pods.go:89] "etcd-old-k8s-version-706028" [994d2bc9-8d4e-4211-a391-67531749ae73] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:09:46.091674  510025 system_pods.go:89] "kindnet-6l8w5" [3045e3bc-b846-45c6-a4ff-39e877bbf8ef] Running
	I1123 10:09:46.091701  510025 system_pods.go:89] "kube-apiserver-old-k8s-version-706028" [5fdf9127-966c-4a06-8fd6-4c3ae574b0a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:09:46.091722  510025 system_pods.go:89] "kube-controller-manager-old-k8s-version-706028" [5c49bac9-5830-437f-bf92-5caffda221fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:09:46.091757  510025 system_pods.go:89] "kube-proxy-s9rqv" [2aea0615-8684-4805-8c5d-f37fb042cc30] Running
	I1123 10:09:46.091786  510025 system_pods.go:89] "kube-scheduler-old-k8s-version-706028" [bd09b544-f854-4b15-a1ea-124bdfb16b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:09:46.091807  510025 system_pods.go:89] "storage-provisioner" [4bc52b3c-0d21-412d-bf6b-74f8dab91ac1] Running
	I1123 10:09:46.091844  510025 system_pods.go:126] duration metric: took 4.071472ms to wait for k8s-apps to be running ...
	I1123 10:09:46.091872  510025 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:09:46.091955  510025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:09:46.107867  510025 system_svc.go:56] duration metric: took 15.974153ms WaitForService to wait for kubelet
	I1123 10:09:46.107946  510025 kubeadm.go:587] duration metric: took 10.901047246s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:09:46.107988  510025 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:09:46.111285  510025 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:09:46.111365  510025 node_conditions.go:123] node cpu capacity is 2
	I1123 10:09:46.111408  510025 node_conditions.go:105] duration metric: took 3.378955ms to run NodePressure ...
	I1123 10:09:46.111440  510025 start.go:242] waiting for startup goroutines ...
	I1123 10:09:46.111464  510025 start.go:247] waiting for cluster config update ...
	I1123 10:09:46.111501  510025 start.go:256] writing updated cluster config ...
	I1123 10:09:46.111848  510025 ssh_runner.go:195] Run: rm -f paused
	I1123 10:09:46.116170  510025 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:09:46.123416  510025 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-h6b8n" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:09:48.129809  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:09:50.629071  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	I1123 10:09:50.247619  507023 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:09:50.251999  507023 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:09:50.252022  507023 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:09:50.265007  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:09:50.577755  507023 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:09:50.577898  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:50.577992  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-020224 minikube.k8s.io/updated_at=2025_11_23T10_09_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=no-preload-020224 minikube.k8s.io/primary=true
	I1123 10:09:50.716758  507023 ops.go:34] apiserver oom_adj: -16
	I1123 10:09:50.716862  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:51.217964  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:51.717695  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:52.216929  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:52.717348  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:53.217755  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:53.717021  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:54.217703  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:54.717265  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:54.814720  507023 kubeadm.go:1114] duration metric: took 4.236868611s to wait for elevateKubeSystemPrivileges
	I1123 10:09:54.814756  507023 kubeadm.go:403] duration metric: took 26.387618997s to StartCluster
	I1123 10:09:54.814775  507023 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:54.814861  507023 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:09:54.815840  507023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:54.816077  507023 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:09:54.816093  507023 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:09:54.816362  507023 config.go:182] Loaded profile config "no-preload-020224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:09:54.816410  507023 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:09:54.816483  507023 addons.go:70] Setting storage-provisioner=true in profile "no-preload-020224"
	I1123 10:09:54.816499  507023 addons.go:239] Setting addon storage-provisioner=true in "no-preload-020224"
	I1123 10:09:54.816526  507023 host.go:66] Checking if "no-preload-020224" exists ...
	I1123 10:09:54.816525  507023 addons.go:70] Setting default-storageclass=true in profile "no-preload-020224"
	I1123 10:09:54.816543  507023 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-020224"
	I1123 10:09:54.816857  507023 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:09:54.817006  507023 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:09:54.819245  507023 out.go:179] * Verifying Kubernetes components...
	I1123 10:09:54.822295  507023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:09:54.856610  507023 addons.go:239] Setting addon default-storageclass=true in "no-preload-020224"
	I1123 10:09:54.856652  507023 host.go:66] Checking if "no-preload-020224" exists ...
	I1123 10:09:54.857066  507023 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:09:54.868394  507023 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1123 10:09:52.630556  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:09:55.139752  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	I1123 10:09:54.873034  507023 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:09:54.873060  507023 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:09:54.873144  507023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:09:54.900717  507023 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:09:54.900740  507023 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:09:54.900803  507023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:09:54.923047  507023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:09:54.943020  507023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:09:55.227160  507023 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:09:55.345474  507023 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:09:55.345586  507023 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:09:55.371195  507023 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:09:56.298483  507023 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.071290237s)
	I1123 10:09:56.299413  507023 node_ready.go:35] waiting up to 6m0s for node "no-preload-020224" to be "Ready" ...
	I1123 10:09:56.299703  507023 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 10:09:56.359702  507023 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1123 10:09:57.631453  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:09:59.647935  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	I1123 10:09:56.362681  507023 addons.go:530] duration metric: took 1.546265819s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:09:56.812226  507023 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-020224" context rescaled to 1 replicas
	W1123 10:09:58.303234  507023 node_ready.go:57] node "no-preload-020224" has "Ready":"False" status (will retry)
	W1123 10:10:00.312323  507023 node_ready.go:57] node "no-preload-020224" has "Ready":"False" status (will retry)
	W1123 10:10:02.133676  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:10:04.628699  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:10:02.809713  507023 node_ready.go:57] node "no-preload-020224" has "Ready":"False" status (will retry)
	W1123 10:10:05.303025  507023 node_ready.go:57] node "no-preload-020224" has "Ready":"False" status (will retry)
	W1123 10:10:06.630741  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:10:09.129396  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:10:07.805356  507023 node_ready.go:57] node "no-preload-020224" has "Ready":"False" status (will retry)
	W1123 10:10:09.806317  507023 node_ready.go:57] node "no-preload-020224" has "Ready":"False" status (will retry)
	I1123 10:10:11.803086  507023 node_ready.go:49] node "no-preload-020224" is "Ready"
	I1123 10:10:11.803113  507023 node_ready.go:38] duration metric: took 15.50367439s for node "no-preload-020224" to be "Ready" ...
	I1123 10:10:11.803127  507023 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:10:11.803184  507023 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:10:11.834253  507023 api_server.go:72] duration metric: took 17.018129095s to wait for apiserver process to appear ...
	I1123 10:10:11.834277  507023 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:10:11.834296  507023 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:10:11.849038  507023 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 10:10:11.850114  507023 api_server.go:141] control plane version: v1.34.1
	I1123 10:10:11.850135  507023 api_server.go:131] duration metric: took 15.851432ms to wait for apiserver health ...
	I1123 10:10:11.850143  507023 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:10:11.852805  507023 system_pods.go:59] 8 kube-system pods found
	I1123 10:10:11.852831  507023 system_pods.go:61] "coredns-66bc5c9577-v59bz" [9cd5752f-f6a3-4db9-a644-1c18ff268642] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:10:11.852838  507023 system_pods.go:61] "etcd-no-preload-020224" [8dccbade-8a60-4d0f-9676-d6a2755663f9] Running
	I1123 10:10:11.852843  507023 system_pods.go:61] "kindnet-ghq9t" [a82575e8-2a03-4722-8611-dab3ceda4f39] Running
	I1123 10:10:11.852847  507023 system_pods.go:61] "kube-apiserver-no-preload-020224" [a7f60049-0c2f-4359-9d93-d13658d03d02] Running
	I1123 10:10:11.852851  507023 system_pods.go:61] "kube-controller-manager-no-preload-020224" [8a60d5f3-d38b-408b-ac99-8e9e3cc1da22] Running
	I1123 10:10:11.852855  507023 system_pods.go:61] "kube-proxy-7s6pf" [54924ab5-094f-48de-8483-f31455e53773] Running
	I1123 10:10:11.852858  507023 system_pods.go:61] "kube-scheduler-no-preload-020224" [313e344b-1c48-4c74-8237-387cff8a8c8b] Running
	I1123 10:10:11.852863  507023 system_pods.go:61] "storage-provisioner" [6796ee0a-02e3-4c46-a03b-115136ad2780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:10:11.852869  507023 system_pods.go:74] duration metric: took 2.720269ms to wait for pod list to return data ...
	I1123 10:10:11.852877  507023 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:10:11.855391  507023 default_sa.go:45] found service account: "default"
	I1123 10:10:11.855410  507023 default_sa.go:55] duration metric: took 2.527075ms for default service account to be created ...
	I1123 10:10:11.855418  507023 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:10:11.861325  507023 system_pods.go:86] 8 kube-system pods found
	I1123 10:10:11.861459  507023 system_pods.go:89] "coredns-66bc5c9577-v59bz" [9cd5752f-f6a3-4db9-a644-1c18ff268642] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:10:11.861500  507023 system_pods.go:89] "etcd-no-preload-020224" [8dccbade-8a60-4d0f-9676-d6a2755663f9] Running
	I1123 10:10:11.861516  507023 system_pods.go:89] "kindnet-ghq9t" [a82575e8-2a03-4722-8611-dab3ceda4f39] Running
	I1123 10:10:11.861523  507023 system_pods.go:89] "kube-apiserver-no-preload-020224" [a7f60049-0c2f-4359-9d93-d13658d03d02] Running
	I1123 10:10:11.861528  507023 system_pods.go:89] "kube-controller-manager-no-preload-020224" [8a60d5f3-d38b-408b-ac99-8e9e3cc1da22] Running
	I1123 10:10:11.861534  507023 system_pods.go:89] "kube-proxy-7s6pf" [54924ab5-094f-48de-8483-f31455e53773] Running
	I1123 10:10:11.861538  507023 system_pods.go:89] "kube-scheduler-no-preload-020224" [313e344b-1c48-4c74-8237-387cff8a8c8b] Running
	I1123 10:10:11.861557  507023 system_pods.go:89] "storage-provisioner" [6796ee0a-02e3-4c46-a03b-115136ad2780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:10:11.861581  507023 retry.go:31] will retry after 233.21626ms: missing components: kube-dns
	I1123 10:10:12.098878  507023 system_pods.go:86] 8 kube-system pods found
	I1123 10:10:12.098918  507023 system_pods.go:89] "coredns-66bc5c9577-v59bz" [9cd5752f-f6a3-4db9-a644-1c18ff268642] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:10:12.098926  507023 system_pods.go:89] "etcd-no-preload-020224" [8dccbade-8a60-4d0f-9676-d6a2755663f9] Running
	I1123 10:10:12.098932  507023 system_pods.go:89] "kindnet-ghq9t" [a82575e8-2a03-4722-8611-dab3ceda4f39] Running
	I1123 10:10:12.098959  507023 system_pods.go:89] "kube-apiserver-no-preload-020224" [a7f60049-0c2f-4359-9d93-d13658d03d02] Running
	I1123 10:10:12.098970  507023 system_pods.go:89] "kube-controller-manager-no-preload-020224" [8a60d5f3-d38b-408b-ac99-8e9e3cc1da22] Running
	I1123 10:10:12.098974  507023 system_pods.go:89] "kube-proxy-7s6pf" [54924ab5-094f-48de-8483-f31455e53773] Running
	I1123 10:10:12.098978  507023 system_pods.go:89] "kube-scheduler-no-preload-020224" [313e344b-1c48-4c74-8237-387cff8a8c8b] Running
	I1123 10:10:12.098987  507023 system_pods.go:89] "storage-provisioner" [6796ee0a-02e3-4c46-a03b-115136ad2780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:10:12.099006  507023 retry.go:31] will retry after 381.18787ms: missing components: kube-dns
	I1123 10:10:12.483736  507023 system_pods.go:86] 8 kube-system pods found
	I1123 10:10:12.483772  507023 system_pods.go:89] "coredns-66bc5c9577-v59bz" [9cd5752f-f6a3-4db9-a644-1c18ff268642] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:10:12.483779  507023 system_pods.go:89] "etcd-no-preload-020224" [8dccbade-8a60-4d0f-9676-d6a2755663f9] Running
	I1123 10:10:12.483796  507023 system_pods.go:89] "kindnet-ghq9t" [a82575e8-2a03-4722-8611-dab3ceda4f39] Running
	I1123 10:10:12.483802  507023 system_pods.go:89] "kube-apiserver-no-preload-020224" [a7f60049-0c2f-4359-9d93-d13658d03d02] Running
	I1123 10:10:12.483807  507023 system_pods.go:89] "kube-controller-manager-no-preload-020224" [8a60d5f3-d38b-408b-ac99-8e9e3cc1da22] Running
	I1123 10:10:12.483811  507023 system_pods.go:89] "kube-proxy-7s6pf" [54924ab5-094f-48de-8483-f31455e53773] Running
	I1123 10:10:12.483814  507023 system_pods.go:89] "kube-scheduler-no-preload-020224" [313e344b-1c48-4c74-8237-387cff8a8c8b] Running
	I1123 10:10:12.483820  507023 system_pods.go:89] "storage-provisioner" [6796ee0a-02e3-4c46-a03b-115136ad2780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:10:12.483834  507023 retry.go:31] will retry after 370.76949ms: missing components: kube-dns
	I1123 10:10:12.858596  507023 system_pods.go:86] 8 kube-system pods found
	I1123 10:10:12.858630  507023 system_pods.go:89] "coredns-66bc5c9577-v59bz" [9cd5752f-f6a3-4db9-a644-1c18ff268642] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:10:12.858638  507023 system_pods.go:89] "etcd-no-preload-020224" [8dccbade-8a60-4d0f-9676-d6a2755663f9] Running
	I1123 10:10:12.858644  507023 system_pods.go:89] "kindnet-ghq9t" [a82575e8-2a03-4722-8611-dab3ceda4f39] Running
	I1123 10:10:12.858648  507023 system_pods.go:89] "kube-apiserver-no-preload-020224" [a7f60049-0c2f-4359-9d93-d13658d03d02] Running
	I1123 10:10:12.858656  507023 system_pods.go:89] "kube-controller-manager-no-preload-020224" [8a60d5f3-d38b-408b-ac99-8e9e3cc1da22] Running
	I1123 10:10:12.858660  507023 system_pods.go:89] "kube-proxy-7s6pf" [54924ab5-094f-48de-8483-f31455e53773] Running
	I1123 10:10:12.858664  507023 system_pods.go:89] "kube-scheduler-no-preload-020224" [313e344b-1c48-4c74-8237-387cff8a8c8b] Running
	I1123 10:10:12.858670  507023 system_pods.go:89] "storage-provisioner" [6796ee0a-02e3-4c46-a03b-115136ad2780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:10:12.858690  507023 retry.go:31] will retry after 522.823077ms: missing components: kube-dns
	I1123 10:10:13.385899  507023 system_pods.go:86] 8 kube-system pods found
	I1123 10:10:13.385933  507023 system_pods.go:89] "coredns-66bc5c9577-v59bz" [9cd5752f-f6a3-4db9-a644-1c18ff268642] Running
	I1123 10:10:13.385940  507023 system_pods.go:89] "etcd-no-preload-020224" [8dccbade-8a60-4d0f-9676-d6a2755663f9] Running
	I1123 10:10:13.385945  507023 system_pods.go:89] "kindnet-ghq9t" [a82575e8-2a03-4722-8611-dab3ceda4f39] Running
	I1123 10:10:13.385950  507023 system_pods.go:89] "kube-apiserver-no-preload-020224" [a7f60049-0c2f-4359-9d93-d13658d03d02] Running
	I1123 10:10:13.385955  507023 system_pods.go:89] "kube-controller-manager-no-preload-020224" [8a60d5f3-d38b-408b-ac99-8e9e3cc1da22] Running
	I1123 10:10:13.385959  507023 system_pods.go:89] "kube-proxy-7s6pf" [54924ab5-094f-48de-8483-f31455e53773] Running
	I1123 10:10:13.385963  507023 system_pods.go:89] "kube-scheduler-no-preload-020224" [313e344b-1c48-4c74-8237-387cff8a8c8b] Running
	I1123 10:10:13.385967  507023 system_pods.go:89] "storage-provisioner" [6796ee0a-02e3-4c46-a03b-115136ad2780] Running
	I1123 10:10:13.385974  507023 system_pods.go:126] duration metric: took 1.530550599s to wait for k8s-apps to be running ...
	I1123 10:10:13.385986  507023 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:10:13.386043  507023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:10:13.401187  507023 system_svc.go:56] duration metric: took 15.19181ms WaitForService to wait for kubelet
	I1123 10:10:13.401215  507023 kubeadm.go:587] duration metric: took 18.585095156s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:10:13.401232  507023 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:10:13.404855  507023 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:10:13.404884  507023 node_conditions.go:123] node cpu capacity is 2
	I1123 10:10:13.404897  507023 node_conditions.go:105] duration metric: took 3.659936ms to run NodePressure ...
	I1123 10:10:13.404911  507023 start.go:242] waiting for startup goroutines ...
	I1123 10:10:13.404918  507023 start.go:247] waiting for cluster config update ...
	I1123 10:10:13.404929  507023 start.go:256] writing updated cluster config ...
	I1123 10:10:13.405217  507023 ssh_runner.go:195] Run: rm -f paused
	I1123 10:10:13.412081  507023 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:10:13.415690  507023 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v59bz" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:13.421477  507023 pod_ready.go:94] pod "coredns-66bc5c9577-v59bz" is "Ready"
	I1123 10:10:13.421509  507023 pod_ready.go:86] duration metric: took 5.79196ms for pod "coredns-66bc5c9577-v59bz" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:13.424109  507023 pod_ready.go:83] waiting for pod "etcd-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:13.428887  507023 pod_ready.go:94] pod "etcd-no-preload-020224" is "Ready"
	I1123 10:10:13.428913  507023 pod_ready.go:86] duration metric: took 4.780094ms for pod "etcd-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:13.431451  507023 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:13.438399  507023 pod_ready.go:94] pod "kube-apiserver-no-preload-020224" is "Ready"
	I1123 10:10:13.438510  507023 pod_ready.go:86] duration metric: took 7.032745ms for pod "kube-apiserver-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:13.448875  507023 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:13.815948  507023 pod_ready.go:94] pod "kube-controller-manager-no-preload-020224" is "Ready"
	I1123 10:10:13.815975  507023 pod_ready.go:86] duration metric: took 367.025898ms for pod "kube-controller-manager-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:14.016632  507023 pod_ready.go:83] waiting for pod "kube-proxy-7s6pf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:14.415916  507023 pod_ready.go:94] pod "kube-proxy-7s6pf" is "Ready"
	I1123 10:10:14.415991  507023 pod_ready.go:86] duration metric: took 399.329072ms for pod "kube-proxy-7s6pf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:14.617741  507023 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:15.027806  507023 pod_ready.go:94] pod "kube-scheduler-no-preload-020224" is "Ready"
	I1123 10:10:15.027833  507023 pod_ready.go:86] duration metric: took 410.055337ms for pod "kube-scheduler-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:15.027848  507023 pod_ready.go:40] duration metric: took 1.615729566s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:10:15.102996  507023 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:10:15.108795  507023 out.go:179] * Done! kubectl is now configured to use "no-preload-020224" cluster and "default" namespace by default
	W1123 10:10:11.129640  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:10:13.629506  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:10:16.128952  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:10:18.129394  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:10:20.630325  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	I1123 10:10:23.129810  510025 pod_ready.go:94] pod "coredns-5dd5756b68-h6b8n" is "Ready"
	I1123 10:10:23.129839  510025 pod_ready.go:86] duration metric: took 37.006351007s for pod "coredns-5dd5756b68-h6b8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:23.133135  510025 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-706028" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:23.138151  510025 pod_ready.go:94] pod "etcd-old-k8s-version-706028" is "Ready"
	I1123 10:10:23.138175  510025 pod_ready.go:86] duration metric: took 5.008988ms for pod "etcd-old-k8s-version-706028" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:23.141508  510025 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-706028" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:23.146675  510025 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-706028" is "Ready"
	I1123 10:10:23.146701  510025 pod_ready.go:86] duration metric: took 5.169418ms for pod "kube-apiserver-old-k8s-version-706028" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:23.149711  510025 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-706028" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:23.328518  510025 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-706028" is "Ready"
	I1123 10:10:23.328544  510025 pod_ready.go:86] duration metric: took 178.810924ms for pod "kube-controller-manager-old-k8s-version-706028" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:23.527352  510025 pod_ready.go:83] waiting for pod "kube-proxy-s9rqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:23.927131  510025 pod_ready.go:94] pod "kube-proxy-s9rqv" is "Ready"
	I1123 10:10:23.927158  510025 pod_ready.go:86] duration metric: took 399.732025ms for pod "kube-proxy-s9rqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:24.128621  510025 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-706028" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:24.527488  510025 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-706028" is "Ready"
	I1123 10:10:24.527514  510025 pod_ready.go:86] duration metric: took 398.85625ms for pod "kube-scheduler-old-k8s-version-706028" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:24.527526  510025 pod_ready.go:40] duration metric: took 38.411281224s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:10:24.613564  510025 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1123 10:10:24.616787  510025 out.go:203] 
	W1123 10:10:24.619625  510025 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 10:10:24.622595  510025 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 10:10:24.625465  510025 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-706028" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 10:10:12 no-preload-020224 crio[833]: time="2025-11-23T10:10:12.015496078Z" level=info msg="Created container 190d577e92ff4d10c26651204c843e2279145a18a64103a19a8811bf3e225fc1: kube-system/coredns-66bc5c9577-v59bz/coredns" id=fcb37628-c0ab-4572-b091-6d9538d8708b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:10:12 no-preload-020224 crio[833]: time="2025-11-23T10:10:12.016783386Z" level=info msg="Starting container: 190d577e92ff4d10c26651204c843e2279145a18a64103a19a8811bf3e225fc1" id=3282eaa5-8740-457b-b06b-f62077cfbbd9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:10:12 no-preload-020224 crio[833]: time="2025-11-23T10:10:12.0201233Z" level=info msg="Started container" PID=2483 containerID=190d577e92ff4d10c26651204c843e2279145a18a64103a19a8811bf3e225fc1 description=kube-system/coredns-66bc5c9577-v59bz/coredns id=3282eaa5-8740-457b-b06b-f62077cfbbd9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7eb8fb9e05ab60790a1f2ad716bf6b1fa7d3122dcbbc6f33d84354210d0ee7c7
	Nov 23 10:10:15 no-preload-020224 crio[833]: time="2025-11-23T10:10:15.628162273Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c925e125-6c38-4fe3-bc7c-1c81a53bcc46 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:10:15 no-preload-020224 crio[833]: time="2025-11-23T10:10:15.628236728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:10:15 no-preload-020224 crio[833]: time="2025-11-23T10:10:15.634110536Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d778d0c695c288670f8b0f21b88adc70db785a4f1e17955b3c6b0d358c43536e UID:6365a14a-d665-4e48-8060-59665b080967 NetNS:/var/run/netns/b147caca-8687-4f88-8e7a-6dc8610cccc8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000170f48}] Aliases:map[]}"
	Nov 23 10:10:15 no-preload-020224 crio[833]: time="2025-11-23T10:10:15.634262194Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 10:10:15 no-preload-020224 crio[833]: time="2025-11-23T10:10:15.64533751Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d778d0c695c288670f8b0f21b88adc70db785a4f1e17955b3c6b0d358c43536e UID:6365a14a-d665-4e48-8060-59665b080967 NetNS:/var/run/netns/b147caca-8687-4f88-8e7a-6dc8610cccc8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000170f48}] Aliases:map[]}"
	Nov 23 10:10:15 no-preload-020224 crio[833]: time="2025-11-23T10:10:15.646189545Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 10:10:15 no-preload-020224 crio[833]: time="2025-11-23T10:10:15.649628128Z" level=info msg="Ran pod sandbox d778d0c695c288670f8b0f21b88adc70db785a4f1e17955b3c6b0d358c43536e with infra container: default/busybox/POD" id=c925e125-6c38-4fe3-bc7c-1c81a53bcc46 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:10:15 no-preload-020224 crio[833]: time="2025-11-23T10:10:15.652072354Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=30348e74-62ee-4ab7-92db-0603ef01088a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:10:15 no-preload-020224 crio[833]: time="2025-11-23T10:10:15.652459553Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=30348e74-62ee-4ab7-92db-0603ef01088a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:10:15 no-preload-020224 crio[833]: time="2025-11-23T10:10:15.652677452Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=30348e74-62ee-4ab7-92db-0603ef01088a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:10:15 no-preload-020224 crio[833]: time="2025-11-23T10:10:15.655053927Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2dbcfa2a-d777-4e02-a7e6-27608166c6fe name=/runtime.v1.ImageService/PullImage
	Nov 23 10:10:15 no-preload-020224 crio[833]: time="2025-11-23T10:10:15.65899738Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 10:10:17 no-preload-020224 crio[833]: time="2025-11-23T10:10:17.769300025Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=2dbcfa2a-d777-4e02-a7e6-27608166c6fe name=/runtime.v1.ImageService/PullImage
	Nov 23 10:10:17 no-preload-020224 crio[833]: time="2025-11-23T10:10:17.769845306Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c1371633-823b-4932-98e4-af4ca503e2fc name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:10:17 no-preload-020224 crio[833]: time="2025-11-23T10:10:17.771537495Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b386c3a9-b5a6-4b23-809c-695b41f7347f name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:10:17 no-preload-020224 crio[833]: time="2025-11-23T10:10:17.777326189Z" level=info msg="Creating container: default/busybox/busybox" id=0dafaa0b-4452-4ae6-ba4e-812ab5150081 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:10:17 no-preload-020224 crio[833]: time="2025-11-23T10:10:17.777484313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:10:17 no-preload-020224 crio[833]: time="2025-11-23T10:10:17.782262217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:10:17 no-preload-020224 crio[833]: time="2025-11-23T10:10:17.782781725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:10:17 no-preload-020224 crio[833]: time="2025-11-23T10:10:17.812973907Z" level=info msg="Created container 8a60f473feaae46fad9206c7ef1b7f5fbe6496763596c64ec9a47047fbf74a30: default/busybox/busybox" id=0dafaa0b-4452-4ae6-ba4e-812ab5150081 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:10:17 no-preload-020224 crio[833]: time="2025-11-23T10:10:17.816124501Z" level=info msg="Starting container: 8a60f473feaae46fad9206c7ef1b7f5fbe6496763596c64ec9a47047fbf74a30" id=b0b42a0e-48f8-44fd-ada8-328a55379b69 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:10:17 no-preload-020224 crio[833]: time="2025-11-23T10:10:17.819243644Z" level=info msg="Started container" PID=2536 containerID=8a60f473feaae46fad9206c7ef1b7f5fbe6496763596c64ec9a47047fbf74a30 description=default/busybox/busybox id=b0b42a0e-48f8-44fd-ada8-328a55379b69 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d778d0c695c288670f8b0f21b88adc70db785a4f1e17955b3c6b0d358c43536e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8a60f473feaae       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   d778d0c695c28       busybox                                     default
	190d577e92ff4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago      Running             coredns                   0                   7eb8fb9e05ab6       coredns-66bc5c9577-v59bz                    kube-system
	d0f8cd30a5c4a       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   ddeaa8a5efc1a       storage-provisioner                         kube-system
	881b983b68bb9       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   f38a8e88003b8       kindnet-ghq9t                               kube-system
	b7e0523c3f6ab       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      29 seconds ago      Running             kube-proxy                0                   f27d37000728c       kube-proxy-7s6pf                            kube-system
	4488719d3b38d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      46 seconds ago      Running             kube-scheduler            0                   c0747ce056023       kube-scheduler-no-preload-020224            kube-system
	1e0be521ce426       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      46 seconds ago      Running             etcd                      0                   09a900cadfc8f       etcd-no-preload-020224                      kube-system
	2addf2f3aa0f3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      46 seconds ago      Running             kube-apiserver            0                   b2d9079893205       kube-apiserver-no-preload-020224            kube-system
	4b3bad5ed201d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      46 seconds ago      Running             kube-controller-manager   0                   0ff266a6ad812       kube-controller-manager-no-preload-020224   kube-system
	
	
	==> coredns [190d577e92ff4d10c26651204c843e2279145a18a64103a19a8811bf3e225fc1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46499 - 4337 "HINFO IN 5319623579520033823.4731259772144945685. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01410005s
	
	
	==> describe nodes <==
	Name:               no-preload-020224
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-020224
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=no-preload-020224
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_09_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:09:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-020224
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:10:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:10:20 +0000   Sun, 23 Nov 2025 10:09:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:10:20 +0000   Sun, 23 Nov 2025 10:09:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:10:20 +0000   Sun, 23 Nov 2025 10:09:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:10:20 +0000   Sun, 23 Nov 2025 10:10:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-020224
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                57e370ae-7663-48e3-a7c6-52885f59b718
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-v59bz                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-no-preload-020224                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         37s
	  kube-system                 kindnet-ghq9t                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-020224             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-no-preload-020224    200m (10%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-7s6pf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-020224             100m (5%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29s                kube-proxy       
	  Normal   NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node no-preload-020224 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node no-preload-020224 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node no-preload-020224 status is now: NodeHasSufficientPID
	  Normal   Starting                 37s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 37s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  37s                kubelet          Node no-preload-020224 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    37s                kubelet          Node no-preload-020224 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     37s                kubelet          Node no-preload-020224 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           32s                node-controller  Node no-preload-020224 event: Registered Node no-preload-020224 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-020224 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 09:46] overlayfs: idmapped layers are currently not supported
	[ +17.278795] overlayfs: idmapped layers are currently not supported
	[Nov23 09:47] overlayfs: idmapped layers are currently not supported
	[ +12.563591] hrtimer: interrupt took 4093727 ns
	[ +14.190024] overlayfs: idmapped layers are currently not supported
	[Nov23 09:49] overlayfs: idmapped layers are currently not supported
	[Nov23 09:50] overlayfs: idmapped layers are currently not supported
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1e0be521ce4265048375eb6943f9aa61f51c604e595c742224f19e7d58df51ff] <==
	{"level":"warn","ts":"2025-11-23T10:09:45.786617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:45.843223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:45.844489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:45.878914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:45.899949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:45.948936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:45.955522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:45.973638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.005353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.058675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.081801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.107534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.136527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.146882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.162793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.178678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.194666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.209534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.225919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.240335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.261826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.288858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.302340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.317595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:09:46.392584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48636","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:10:26 up  2:52,  0 user,  load average: 5.74, 4.59, 3.40
	Linux no-preload-020224 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [881b983b68bb9fcb1cb55f9aa2db7d101bdfd2d37c79c7e2127f91da6bf15e38] <==
	I1123 10:10:00.967131       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:10:00.967509       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 10:10:00.967682       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:10:01.057807       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:10:01.057949       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:10:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:10:01.265386       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:10:01.265477       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:10:01.265512       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:10:01.268227       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:10:01.468703       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:10:01.469083       1 metrics.go:72] Registering metrics
	I1123 10:10:01.469239       1 controller.go:711] "Syncing nftables rules"
	I1123 10:10:11.272058       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:10:11.272108       1 main.go:301] handling current node
	I1123 10:10:21.265902       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:10:21.265974       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2addf2f3aa0f3a20f086282f981a0a086407a1a4511c469bb86c44778bc3686c] <==
	E1123 10:09:47.282568       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1123 10:09:47.326839       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:09:47.346135       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 10:09:47.349547       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:09:47.374709       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:09:47.374846       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:09:47.494470       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:09:47.919179       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:09:47.924433       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:09:47.924454       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:09:48.641212       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:09:48.700628       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:09:48.842262       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:09:48.853688       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 10:09:48.854695       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:09:48.866053       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:09:49.124244       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:09:49.643453       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:09:49.658866       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:09:49.673253       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 10:09:54.186648       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:09:54.191292       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:09:55.152339       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:09:55.191716       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 10:10:24.450237       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:42340: use of closed network connection
	
	
	==> kube-controller-manager [4b3bad5ed201d47009f0566c0fda63273f81765272ad2860c0cad0fa59fa2c16] <==
	I1123 10:09:54.129976       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 10:09:54.129990       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 10:09:54.130613       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:09:54.131360       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 10:09:54.135268       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 10:09:54.137332       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:09:54.137757       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:09:54.138623       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 10:09:54.138645       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:09:54.141275       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:09:54.147212       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:09:54.148136       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-020224" podCIDRs=["10.244.0.0/24"]
	I1123 10:09:54.149583       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 10:09:54.158631       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 10:09:54.172716       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:09:54.172843       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 10:09:54.172894       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:09:54.173024       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 10:09:54.173077       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:09:54.173775       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:09:54.195284       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:09:54.220913       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:09:54.221044       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 10:09:54.221092       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 10:10:14.082768       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b7e0523c3f6abce68450ec5630095fad5123d021c6e69df1fdd1528316828c0f] <==
	I1123 10:09:56.824000       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:09:56.918939       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:09:57.019113       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:09:57.019147       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 10:09:57.019217       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:09:57.059258       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:09:57.059313       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:09:57.063978       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:09:57.064264       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:09:57.064282       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:09:57.065379       1 config.go:200] "Starting service config controller"
	I1123 10:09:57.065575       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:09:57.065983       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:09:57.066068       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:09:57.066137       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:09:57.066167       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:09:57.066875       1 config.go:309] "Starting node config controller"
	I1123 10:09:57.066931       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:09:57.066960       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:09:57.165753       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:09:57.167567       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:09:57.167605       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4488719d3b38d8c299f9e3605c33d03f7eeb1ecf5aa4c6633ca591e1dd2bb346] <==
	E1123 10:09:47.257931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 10:09:47.258032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:09:47.258089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 10:09:47.258152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:09:47.258223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 10:09:47.258275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:09:47.258325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:09:47.258372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:09:47.258451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 10:09:47.258541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:09:47.258602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 10:09:47.258681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:09:47.258724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:09:47.258790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 10:09:47.258905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 10:09:47.259016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 10:09:48.093636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 10:09:48.096886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:09:48.146949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 10:09:48.166263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 10:09:48.193468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 10:09:48.203940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 10:09:48.210640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:09:48.308687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1123 10:09:51.143924       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:09:50 no-preload-020224 kubelet[1994]: I1123 10:09:50.872563    1994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-020224" podStartSLOduration=1.872543437 podStartE2EDuration="1.872543437s" podCreationTimestamp="2025-11-23 10:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:09:50.85907647 +0000 UTC m=+1.288598398" watchObservedRunningTime="2025-11-23 10:09:50.872543437 +0000 UTC m=+1.302065381"
	Nov 23 10:09:54 no-preload-020224 kubelet[1994]: I1123 10:09:54.157764    1994 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 10:09:54 no-preload-020224 kubelet[1994]: I1123 10:09:54.158883    1994 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 10:09:55 no-preload-020224 kubelet[1994]: I1123 10:09:55.487031    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a82575e8-2a03-4722-8611-dab3ceda4f39-cni-cfg\") pod \"kindnet-ghq9t\" (UID: \"a82575e8-2a03-4722-8611-dab3ceda4f39\") " pod="kube-system/kindnet-ghq9t"
	Nov 23 10:09:55 no-preload-020224 kubelet[1994]: I1123 10:09:55.487139    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a82575e8-2a03-4722-8611-dab3ceda4f39-lib-modules\") pod \"kindnet-ghq9t\" (UID: \"a82575e8-2a03-4722-8611-dab3ceda4f39\") " pod="kube-system/kindnet-ghq9t"
	Nov 23 10:09:55 no-preload-020224 kubelet[1994]: I1123 10:09:55.487165    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrpjn\" (UniqueName: \"kubernetes.io/projected/a82575e8-2a03-4722-8611-dab3ceda4f39-kube-api-access-mrpjn\") pod \"kindnet-ghq9t\" (UID: \"a82575e8-2a03-4722-8611-dab3ceda4f39\") " pod="kube-system/kindnet-ghq9t"
	Nov 23 10:09:55 no-preload-020224 kubelet[1994]: I1123 10:09:55.487233    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54924ab5-094f-48de-8483-f31455e53773-lib-modules\") pod \"kube-proxy-7s6pf\" (UID: \"54924ab5-094f-48de-8483-f31455e53773\") " pod="kube-system/kube-proxy-7s6pf"
	Nov 23 10:09:55 no-preload-020224 kubelet[1994]: I1123 10:09:55.487257    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/54924ab5-094f-48de-8483-f31455e53773-kube-proxy\") pod \"kube-proxy-7s6pf\" (UID: \"54924ab5-094f-48de-8483-f31455e53773\") " pod="kube-system/kube-proxy-7s6pf"
	Nov 23 10:09:55 no-preload-020224 kubelet[1994]: I1123 10:09:55.487273    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54924ab5-094f-48de-8483-f31455e53773-xtables-lock\") pod \"kube-proxy-7s6pf\" (UID: \"54924ab5-094f-48de-8483-f31455e53773\") " pod="kube-system/kube-proxy-7s6pf"
	Nov 23 10:09:55 no-preload-020224 kubelet[1994]: I1123 10:09:55.487364    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhxth\" (UniqueName: \"kubernetes.io/projected/54924ab5-094f-48de-8483-f31455e53773-kube-api-access-zhxth\") pod \"kube-proxy-7s6pf\" (UID: \"54924ab5-094f-48de-8483-f31455e53773\") " pod="kube-system/kube-proxy-7s6pf"
	Nov 23 10:09:55 no-preload-020224 kubelet[1994]: I1123 10:09:55.487383    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a82575e8-2a03-4722-8611-dab3ceda4f39-xtables-lock\") pod \"kindnet-ghq9t\" (UID: \"a82575e8-2a03-4722-8611-dab3ceda4f39\") " pod="kube-system/kindnet-ghq9t"
	Nov 23 10:09:55 no-preload-020224 kubelet[1994]: E1123 10:09:55.500976    1994 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-020224\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-020224' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 23 10:09:55 no-preload-020224 kubelet[1994]: E1123 10:09:55.501056    1994 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-ghq9t\" is forbidden: User \"system:node:no-preload-020224\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-020224' and this object" podUID="a82575e8-2a03-4722-8611-dab3ceda4f39" pod="kube-system/kindnet-ghq9t"
	Nov 23 10:09:56 no-preload-020224 kubelet[1994]: I1123 10:09:56.508695    1994 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 10:09:56 no-preload-020224 kubelet[1994]: I1123 10:09:56.853866    1994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7s6pf" podStartSLOduration=1.853848301 podStartE2EDuration="1.853848301s" podCreationTimestamp="2025-11-23 10:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:09:56.853222182 +0000 UTC m=+7.282744126" watchObservedRunningTime="2025-11-23 10:09:56.853848301 +0000 UTC m=+7.283370229"
	Nov 23 10:10:11 no-preload-020224 kubelet[1994]: I1123 10:10:11.576554    1994 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 10:10:11 no-preload-020224 kubelet[1994]: I1123 10:10:11.607380    1994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ghq9t" podStartSLOduration=12.450928008 podStartE2EDuration="16.607353036s" podCreationTimestamp="2025-11-23 10:09:55 +0000 UTC" firstStartedPulling="2025-11-23 10:09:56.69123646 +0000 UTC m=+7.120758387" lastFinishedPulling="2025-11-23 10:10:00.847661479 +0000 UTC m=+11.277183415" observedRunningTime="2025-11-23 10:10:01.937167185 +0000 UTC m=+12.366689211" watchObservedRunningTime="2025-11-23 10:10:11.607353036 +0000 UTC m=+22.036874964"
	Nov 23 10:10:11 no-preload-020224 kubelet[1994]: I1123 10:10:11.714761    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f27fn\" (UniqueName: \"kubernetes.io/projected/6796ee0a-02e3-4c46-a03b-115136ad2780-kube-api-access-f27fn\") pod \"storage-provisioner\" (UID: \"6796ee0a-02e3-4c46-a03b-115136ad2780\") " pod="kube-system/storage-provisioner"
	Nov 23 10:10:11 no-preload-020224 kubelet[1994]: I1123 10:10:11.714809    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdczn\" (UniqueName: \"kubernetes.io/projected/9cd5752f-f6a3-4db9-a644-1c18ff268642-kube-api-access-hdczn\") pod \"coredns-66bc5c9577-v59bz\" (UID: \"9cd5752f-f6a3-4db9-a644-1c18ff268642\") " pod="kube-system/coredns-66bc5c9577-v59bz"
	Nov 23 10:10:11 no-preload-020224 kubelet[1994]: I1123 10:10:11.714837    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cd5752f-f6a3-4db9-a644-1c18ff268642-config-volume\") pod \"coredns-66bc5c9577-v59bz\" (UID: \"9cd5752f-f6a3-4db9-a644-1c18ff268642\") " pod="kube-system/coredns-66bc5c9577-v59bz"
	Nov 23 10:10:11 no-preload-020224 kubelet[1994]: I1123 10:10:11.714860    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6796ee0a-02e3-4c46-a03b-115136ad2780-tmp\") pod \"storage-provisioner\" (UID: \"6796ee0a-02e3-4c46-a03b-115136ad2780\") " pod="kube-system/storage-provisioner"
	Nov 23 10:10:11 no-preload-020224 kubelet[1994]: W1123 10:10:11.956512    1994 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/crio-7eb8fb9e05ab60790a1f2ad716bf6b1fa7d3122dcbbc6f33d84354210d0ee7c7 WatchSource:0}: Error finding container 7eb8fb9e05ab60790a1f2ad716bf6b1fa7d3122dcbbc6f33d84354210d0ee7c7: Status 404 returned error can't find the container with id 7eb8fb9e05ab60790a1f2ad716bf6b1fa7d3122dcbbc6f33d84354210d0ee7c7
	Nov 23 10:10:12 no-preload-020224 kubelet[1994]: I1123 10:10:12.978710    1994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-v59bz" podStartSLOduration=17.978691243 podStartE2EDuration="17.978691243s" podCreationTimestamp="2025-11-23 10:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:10:12.964647744 +0000 UTC m=+23.394169688" watchObservedRunningTime="2025-11-23 10:10:12.978691243 +0000 UTC m=+23.408213179"
	Nov 23 10:10:12 no-preload-020224 kubelet[1994]: I1123 10:10:12.993099    1994 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.993076639 podStartE2EDuration="16.993076639s" podCreationTimestamp="2025-11-23 10:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:10:12.979321753 +0000 UTC m=+23.408843697" watchObservedRunningTime="2025-11-23 10:10:12.993076639 +0000 UTC m=+23.422598575"
	Nov 23 10:10:15 no-preload-020224 kubelet[1994]: I1123 10:10:15.438193    1994 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms25s\" (UniqueName: \"kubernetes.io/projected/6365a14a-d665-4e48-8060-59665b080967-kube-api-access-ms25s\") pod \"busybox\" (UID: \"6365a14a-d665-4e48-8060-59665b080967\") " pod="default/busybox"
	
	
	==> storage-provisioner [d0f8cd30a5c4a9ab44fb7ac78ec0dc75faf499f49073b05395dea571c0e7c48d] <==
	I1123 10:10:12.000262       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:10:12.021741       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:10:12.021874       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:10:12.026283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:10:12.037175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:10:12.037537       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:10:12.037881       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-020224_ab1ae806-2a82-4943-bd71-8f5438bcc495!
	I1123 10:10:12.039685       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"160d8384-48d9-41be-8c08-06b5acefeeea", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-020224_ab1ae806-2a82-4943-bd71-8f5438bcc495 became leader
	W1123 10:10:12.046038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:10:12.053563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:10:12.138597       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-020224_ab1ae806-2a82-4943-bd71-8f5438bcc495!
	W1123 10:10:14.057347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:10:14.064804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:10:16.068619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:10:16.074303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:10:18.077342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:10:18.082119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:10:20.085554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:10:20.090860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:10:22.094233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:10:22.098687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:10:24.101592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:10:24.106179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:10:26.109358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:10:26.117834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-020224 -n no-preload-020224
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-020224 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.68s)
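
	For reference, the post-mortem collection above reduces to a few commands against the same profile. A minimal shell sketch, assuming the locally built out/minikube-linux-arm64 binary and the no-preload-020224 profile from this run are still available (the final kubectl describe node call is an added illustration, not part of the harness output):

		# apiserver status for the profile, as run by helpers_test.go above
		out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-020224 -n no-preload-020224
		# list any pods that are not in the Running phase
		kubectl --context no-preload-020224 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
		# full node dump, equivalent to the "describe nodes" section captured earlier
		kubectl --context no-preload-020224 describe node no-preload-020224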

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-706028 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-706028 --alsologtostderr -v=1: exit status 80 (1.880333028s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-706028 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:10:36.565009  513907 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:10:36.565829  513907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:36.565848  513907 out.go:374] Setting ErrFile to fd 2...
	I1123 10:10:36.565854  513907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:36.566116  513907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:10:36.566402  513907 out.go:368] Setting JSON to false
	I1123 10:10:36.566428  513907 mustload.go:66] Loading cluster: old-k8s-version-706028
	I1123 10:10:36.566860  513907 config.go:182] Loaded profile config "old-k8s-version-706028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 10:10:36.567351  513907 cli_runner.go:164] Run: docker container inspect old-k8s-version-706028 --format={{.State.Status}}
	I1123 10:10:36.584947  513907 host.go:66] Checking if "old-k8s-version-706028" exists ...
	I1123 10:10:36.585309  513907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:10:36.649170  513907 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-23 10:10:36.639053524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:10:36.650008  513907 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-706028 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 10:10:36.653392  513907 out.go:179] * Pausing node old-k8s-version-706028 ... 
	I1123 10:10:36.656283  513907 host.go:66] Checking if "old-k8s-version-706028" exists ...
	I1123 10:10:36.656633  513907 ssh_runner.go:195] Run: systemctl --version
	I1123 10:10:36.656698  513907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:10:36.674190  513907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
	I1123 10:10:36.780046  513907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:10:36.792599  513907 pause.go:52] kubelet running: true
	I1123 10:10:36.792671  513907 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:10:37.009924  513907 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:10:37.010042  513907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:10:37.085499  513907 cri.go:89] found id: "9bf3dd205682ea3296e952ceb1dadbbe4532b2c1e06757abe529e2af9a50d562"
	I1123 10:10:37.085523  513907 cri.go:89] found id: "1bf03ed5a3dee20793e8e504c18ad29f97cbbd2454a960d77c4e4dfe52e1dde9"
	I1123 10:10:37.085528  513907 cri.go:89] found id: "f8f5c2f8b84b2f925f1dac344595832b43b0211b004448a1db7b9c23faf52228"
	I1123 10:10:37.085531  513907 cri.go:89] found id: "b44546f54a873112a74f2a82e7c9a205fd2e9c0e40cacf6ffa55b2b473ef0d36"
	I1123 10:10:37.085535  513907 cri.go:89] found id: "828cd3adcf6b5681aa7d384f69cb7566664e59a1ab84ee837327f44e3e645dfc"
	I1123 10:10:37.085538  513907 cri.go:89] found id: "34ee70a0be166e12e57fb579eaa0cb22b8873a626bdc6ae8d83d81bfcbff7280"
	I1123 10:10:37.085541  513907 cri.go:89] found id: "98f50d387d5b2fded7f07e260ceb83bce5a609dc2bd07303f78f93578f6d82ed"
	I1123 10:10:37.085544  513907 cri.go:89] found id: "676b2dbee75eee912c3a604195863ba16974dcbd9b686ff17513a405a42b3e91"
	I1123 10:10:37.085547  513907 cri.go:89] found id: "ea67be45b14c0ca0ac41632b23ebd8095b8b2a16235fddfd8d5a4b1519577720"
	I1123 10:10:37.085553  513907 cri.go:89] found id: "c5ec8602847c185ff0bd5b175bcda823368a9380dc871c5be2c3ac84fd5e4292"
	I1123 10:10:37.085556  513907 cri.go:89] found id: "02db52ed7a4e551150e8645311a4cfd60769ef5552d108ac02a63489a373aba2"
	I1123 10:10:37.085559  513907 cri.go:89] found id: ""
	I1123 10:10:37.085614  513907 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:10:37.096664  513907 retry.go:31] will retry after 350.653777ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:10:37Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:10:37.448300  513907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:10:37.461378  513907 pause.go:52] kubelet running: false
	I1123 10:10:37.461471  513907 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:10:37.637992  513907 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:10:37.638070  513907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:10:37.711888  513907 cri.go:89] found id: "9bf3dd205682ea3296e952ceb1dadbbe4532b2c1e06757abe529e2af9a50d562"
	I1123 10:10:37.711914  513907 cri.go:89] found id: "1bf03ed5a3dee20793e8e504c18ad29f97cbbd2454a960d77c4e4dfe52e1dde9"
	I1123 10:10:37.711920  513907 cri.go:89] found id: "f8f5c2f8b84b2f925f1dac344595832b43b0211b004448a1db7b9c23faf52228"
	I1123 10:10:37.711924  513907 cri.go:89] found id: "b44546f54a873112a74f2a82e7c9a205fd2e9c0e40cacf6ffa55b2b473ef0d36"
	I1123 10:10:37.711927  513907 cri.go:89] found id: "828cd3adcf6b5681aa7d384f69cb7566664e59a1ab84ee837327f44e3e645dfc"
	I1123 10:10:37.711931  513907 cri.go:89] found id: "34ee70a0be166e12e57fb579eaa0cb22b8873a626bdc6ae8d83d81bfcbff7280"
	I1123 10:10:37.711934  513907 cri.go:89] found id: "98f50d387d5b2fded7f07e260ceb83bce5a609dc2bd07303f78f93578f6d82ed"
	I1123 10:10:37.711937  513907 cri.go:89] found id: "676b2dbee75eee912c3a604195863ba16974dcbd9b686ff17513a405a42b3e91"
	I1123 10:10:37.711939  513907 cri.go:89] found id: "ea67be45b14c0ca0ac41632b23ebd8095b8b2a16235fddfd8d5a4b1519577720"
	I1123 10:10:37.711951  513907 cri.go:89] found id: "c5ec8602847c185ff0bd5b175bcda823368a9380dc871c5be2c3ac84fd5e4292"
	I1123 10:10:37.711955  513907 cri.go:89] found id: "02db52ed7a4e551150e8645311a4cfd60769ef5552d108ac02a63489a373aba2"
	I1123 10:10:37.711958  513907 cri.go:89] found id: ""
	I1123 10:10:37.712007  513907 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:10:37.723663  513907 retry.go:31] will retry after 386.214894ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:10:37Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:10:38.111043  513907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:10:38.124370  513907 pause.go:52] kubelet running: false
	I1123 10:10:38.124438  513907 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:10:38.288383  513907 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:10:38.288519  513907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:10:38.355358  513907 cri.go:89] found id: "9bf3dd205682ea3296e952ceb1dadbbe4532b2c1e06757abe529e2af9a50d562"
	I1123 10:10:38.355381  513907 cri.go:89] found id: "1bf03ed5a3dee20793e8e504c18ad29f97cbbd2454a960d77c4e4dfe52e1dde9"
	I1123 10:10:38.355387  513907 cri.go:89] found id: "f8f5c2f8b84b2f925f1dac344595832b43b0211b004448a1db7b9c23faf52228"
	I1123 10:10:38.355390  513907 cri.go:89] found id: "b44546f54a873112a74f2a82e7c9a205fd2e9c0e40cacf6ffa55b2b473ef0d36"
	I1123 10:10:38.355394  513907 cri.go:89] found id: "828cd3adcf6b5681aa7d384f69cb7566664e59a1ab84ee837327f44e3e645dfc"
	I1123 10:10:38.355398  513907 cri.go:89] found id: "34ee70a0be166e12e57fb579eaa0cb22b8873a626bdc6ae8d83d81bfcbff7280"
	I1123 10:10:38.355401  513907 cri.go:89] found id: "98f50d387d5b2fded7f07e260ceb83bce5a609dc2bd07303f78f93578f6d82ed"
	I1123 10:10:38.355404  513907 cri.go:89] found id: "676b2dbee75eee912c3a604195863ba16974dcbd9b686ff17513a405a42b3e91"
	I1123 10:10:38.355407  513907 cri.go:89] found id: "ea67be45b14c0ca0ac41632b23ebd8095b8b2a16235fddfd8d5a4b1519577720"
	I1123 10:10:38.355418  513907 cri.go:89] found id: "c5ec8602847c185ff0bd5b175bcda823368a9380dc871c5be2c3ac84fd5e4292"
	I1123 10:10:38.355421  513907 cri.go:89] found id: "02db52ed7a4e551150e8645311a4cfd60769ef5552d108ac02a63489a373aba2"
	I1123 10:10:38.355424  513907 cri.go:89] found id: ""
	I1123 10:10:38.355477  513907 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:10:38.370342  513907 out.go:203] 
	W1123 10:10:38.373309  513907 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:10:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:10:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:10:38.373330  513907 out.go:285] * 
	* 
	W1123 10:10:38.380559  513907 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:10:38.383612  513907 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-706028 --alsologtostderr -v=1 failed: exit status 80
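The failure above is the container-listing step of the pause flow: `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory", even though the same containers are visible via the crictl queries earlier in the log. The following is a minimal, hypothetical sketch (not minikube's implementation, and not a proposed fix) that mirrors the two commands shown in the log, falling back to crictl when runc's default state directory is absent; the file name and the fallback choice are assumptions for illustration only.

// runclist_sketch.go: illustrative only; mirrors the commands visible above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The log error ("open /run/runc: no such file or directory") means
	// runc's default state directory does not exist on this node.
	if _, err := os.Stat("/run/runc"); err == nil {
		// Same command the pause flow runs: sudo runc list -f json
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "runc list failed:", err)
			os.Exit(1)
		}
		fmt.Println(string(out))
		return
	}
	// Fallback (an assumption for illustration): list the same containers
	// through the CRI, as the cri.go step earlier in this log does.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "crictl ps failed:", err)
		os.Exit(1)
	}
	fmt.Println(string(out))
}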
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-706028
helpers_test.go:243: (dbg) docker inspect old-k8s-version-706028:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5",
	        "Created": "2025-11-23T10:08:00.027667236Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 510168,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:09:26.542902134Z",
	            "FinishedAt": "2025-11-23T10:09:24.400275811Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/hosts",
	        "LogPath": "/var/lib/docker/containers/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5-json.log",
	        "Name": "/old-k8s-version-706028",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-706028:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-706028",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5",
	                "LowerDir": "/var/lib/docker/overlay2/4fc786c1031046370668829710493e9535cd397f4cc7ed5d9f51a091e2219a9e-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4fc786c1031046370668829710493e9535cd397f4cc7ed5d9f51a091e2219a9e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4fc786c1031046370668829710493e9535cd397f4cc7ed5d9f51a091e2219a9e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4fc786c1031046370668829710493e9535cd397f4cc7ed5d9f51a091e2219a9e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-706028",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-706028/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-706028",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-706028",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-706028",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "baaa0cc534cbbf7405ae4da7621b549237687412c070314472d691f1a5b76d6e",
	            "SandboxKey": "/var/run/docker/netns/baaa0cc534cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-706028": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:9b:30:a3:33:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "38827229c06574d77dd6a72b1084a1de5267d818d9a4bc2e2e69c7834d9baf50",
	                    "EndpointID": "cb45cd812b1e7d00ec7c8bfe3737b03e040183cd7549ebc5db08a2f7512eec58",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-706028",
	                        "ec71fb4cb0c2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
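The inspect output above shows where the SSH endpoint used at the top of this section (127.0.0.1:33471) comes from: the container's 22/tcp binding under NetworkSettings.Ports. Below is a small, illustrative sketch (assuming docker CLI access on the host; the file name and helper are hypothetical, not part of the test suite) that resolves that mapping the same way.

// sshport_sketch.go: illustrative only; reads the 22/tcp host port mapping
// shown in the docker inspect JSON above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	name := "old-k8s-version-706028" // container name from the inspect output above
	// Equivalent to NetworkSettings.Ports["22/tcp"][0].HostPort in the JSON.
	out, err := exec.Command("docker", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, name).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "docker inspect failed:", err)
		os.Exit(1)
	}
	fmt.Printf("ssh endpoint: 127.0.0.1:%s\n", strings.TrimSpace(string(out)))
}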
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706028 -n old-k8s-version-706028
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706028 -n old-k8s-version-706028: exit status 2 (338.182868ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-706028 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-706028 logs -n 25: (1.756484788s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p calico-507563 sudo docker system info                                                                                                                                                                                                      │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cri-dockerd --version                                                                                                                                                                                                   │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo containerd config dump                                                                                                                                                                                                  │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo crio config                                                                                                                                                                                                             │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ delete  │ -p calico-507563                                                                                                                                                                                                                              │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-706028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │                     │
	│ stop    │ -p old-k8s-version-706028 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-706028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p old-k8s-version-706028 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-020224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ stop    │ -p no-preload-020224 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ image   │ old-k8s-version-706028 image list --format=json                                                                                                                                                                                               │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ pause   │ -p old-k8s-version-706028 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:09:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:09:25.816605  510025 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:09:25.816804  510025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:09:25.816840  510025 out.go:374] Setting ErrFile to fd 2...
	I1123 10:09:25.816861  510025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:09:25.817185  510025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:09:25.818560  510025 out.go:368] Setting JSON to false
	I1123 10:09:25.819894  510025 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10315,"bootTime":1763882251,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:09:25.819991  510025 start.go:143] virtualization:  
	I1123 10:09:25.825487  510025 out.go:179] * [old-k8s-version-706028] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:09:25.828661  510025 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 10:09:25.828713  510025 notify.go:221] Checking for updates...
	I1123 10:09:25.832718  510025 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:09:25.836124  510025 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:09:25.839233  510025 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 10:09:25.842224  510025 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:09:25.845044  510025 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:09:25.848550  510025 config.go:182] Loaded profile config "old-k8s-version-706028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 10:09:25.852238  510025 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 10:09:25.855323  510025 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:09:25.903237  510025 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:09:25.903350  510025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:09:26.147979  510025 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-23 10:09:26.130627918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:09:26.148097  510025 docker.go:319] overlay module found
	I1123 10:09:26.151260  510025 out.go:179] * Using the docker driver based on existing profile
	I1123 10:09:23.838253  507023 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.596998991s)
	I1123 10:09:23.838283  507023 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1123 10:09:23.838302  507023 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1123 10:09:23.838375  507023 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1123 10:09:24.620329  507023 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1123 10:09:24.620360  507023 cache_images.go:125] Successfully loaded all cached images
	I1123 10:09:24.620366  507023 cache_images.go:94] duration metric: took 14.298706809s to LoadCachedImages
	I1123 10:09:24.620378  507023 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1123 10:09:24.620469  507023 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-020224 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-020224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:09:24.620550  507023 ssh_runner.go:195] Run: crio config
	I1123 10:09:24.691576  507023 cni.go:84] Creating CNI manager for ""
	I1123 10:09:24.691650  507023 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:09:24.691684  507023 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:09:24.691736  507023 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-020224 NodeName:no-preload-020224 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:09:24.691911  507023 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-020224"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:09:24.692015  507023 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:09:24.700497  507023 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1123 10:09:24.700576  507023 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1123 10:09:24.708887  507023 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1123 10:09:24.708901  507023 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1123 10:09:24.708930  507023 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1123 10:09:24.709239  507023 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1123 10:09:24.714192  507023 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1123 10:09:24.714228  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1123 10:09:25.758823  507023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:09:25.782186  507023 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1123 10:09:25.786247  507023 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1123 10:09:25.786286  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1123 10:09:25.793611  507023 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1123 10:09:25.815702  507023 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1123 10:09:25.815733  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1123 10:09:26.154246  510025 start.go:309] selected driver: docker
	I1123 10:09:26.154265  510025 start.go:927] validating driver "docker" against &{Name:old-k8s-version-706028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706028 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:09:26.154366  510025 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:09:26.155353  510025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:09:26.350909  510025 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-23 10:09:26.339020749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:09:26.351249  510025 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:09:26.351268  510025 cni.go:84] Creating CNI manager for ""
	I1123 10:09:26.351323  510025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:09:26.351355  510025 start.go:353] cluster config:
	{Name:old-k8s-version-706028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:09:26.354737  510025 out.go:179] * Starting "old-k8s-version-706028" primary control-plane node in "old-k8s-version-706028" cluster
	I1123 10:09:26.357772  510025 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:09:26.360939  510025 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:09:26.363845  510025 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 10:09:26.363890  510025 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1123 10:09:26.363905  510025 cache.go:65] Caching tarball of preloaded images
	I1123 10:09:26.364010  510025 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:09:26.364291  510025 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 10:09:26.364306  510025 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1123 10:09:26.364414  510025 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/config.json ...
	I1123 10:09:26.442931  510025 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:09:26.442958  510025 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:09:26.442973  510025 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:09:26.443005  510025 start.go:360] acquireMachinesLock for old-k8s-version-706028: {Name:mkc18f399d53c3cb3fccf9a7a08ad7a013834dfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:09:26.443075  510025 start.go:364] duration metric: took 41.864µs to acquireMachinesLock for "old-k8s-version-706028"
	I1123 10:09:26.443100  510025 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:09:26.443105  510025 fix.go:54] fixHost starting: 
	I1123 10:09:26.443366  510025 cli_runner.go:164] Run: docker container inspect old-k8s-version-706028 --format={{.State.Status}}
	I1123 10:09:26.499045  510025 fix.go:112] recreateIfNeeded on old-k8s-version-706028: state=Stopped err=<nil>
	W1123 10:09:26.499074  510025 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 10:09:26.616084  507023 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:09:26.626854  507023 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:09:26.651428  507023 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:09:26.669586  507023 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1123 10:09:26.699724  507023 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:09:26.703588  507023 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:09:26.713718  507023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:09:26.886815  507023 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:09:26.905260  507023 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224 for IP: 192.168.85.2
	I1123 10:09:26.905278  507023 certs.go:195] generating shared ca certs ...
	I1123 10:09:26.905297  507023 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:26.905445  507023 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:09:26.905495  507023 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:09:26.905504  507023 certs.go:257] generating profile certs ...
	I1123 10:09:26.905556  507023 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/client.key
	I1123 10:09:26.905566  507023 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/client.crt with IP's: []
	I1123 10:09:27.397684  507023 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/client.crt ...
	I1123 10:09:27.397758  507023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/client.crt: {Name:mka9c1ced24aa3b11a897581db54eee96552e175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:27.398010  507023 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/client.key ...
	I1123 10:09:27.398051  507023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/client.key: {Name:mk59968ca778aae4afdab8270d7f3819ccf3d5c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:27.399875  507023 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.key.d87566b3
	I1123 10:09:27.399951  507023 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.crt.d87566b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 10:09:27.583852  507023 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.crt.d87566b3 ...
	I1123 10:09:27.585952  507023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.crt.d87566b3: {Name:mk236b3518a6eed5134f9b2df5f74ef82cc2c700 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:27.588093  507023 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.key.d87566b3 ...
	I1123 10:09:27.588159  507023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.key.d87566b3: {Name:mkcf10d547d84f16a6e995b1f68dd90878114d77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:27.588360  507023 certs.go:382] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.crt.d87566b3 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.crt
	I1123 10:09:27.588477  507023 certs.go:386] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.key.d87566b3 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.key
	I1123 10:09:27.588580  507023 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.key
	I1123 10:09:27.588626  507023 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.crt with IP's: []
	I1123 10:09:27.983398  507023 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.crt ...
	I1123 10:09:27.983472  507023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.crt: {Name:mka4b2bc3a3f34803c036958ba4ccf37c25d1d49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:27.983676  507023 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.key ...
	I1123 10:09:27.983718  507023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.key: {Name:mkb311f5d3f3360de8949fed7bef66d4cce7e547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
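
The certs.go/crypto.go lines above issue three CA-signed profile certificates, with the apiserver certificate carrying the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.85.2. A self-contained sketch of issuing such a certificate with Go's crypto/x509 (an ephemeral CA is generated here purely for illustration; minikube reuses the existing ca.key under .minikube/, and error handling is elided):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA (the real flow loads ca.crt/ca.key from disk instead).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf certificate with the same kind of IP SANs seen in the log.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
			},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

		// Write PEM files, as the log does for apiserver.crt / apiserver.key.
		_ = os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER}), 0644)
		_ = os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(leafKey)}), 0600)
	}
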
	I1123 10:09:27.983949  507023 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:09:27.984017  507023 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:09:27.984044  507023 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:09:27.984095  507023 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:09:27.984142  507023 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:09:27.984187  507023 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:09:27.984259  507023 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:09:27.984839  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:09:28.007669  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:09:28.031070  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:09:28.053034  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:09:28.086948  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:09:28.107917  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:09:28.126138  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:09:28.145910  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:09:28.164185  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:09:28.182701  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:09:28.200422  507023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:09:28.217200  507023 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:09:28.229856  507023 ssh_runner.go:195] Run: openssl version
	I1123 10:09:28.236225  507023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:09:28.244515  507023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:09:28.248365  507023 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:09:28.248487  507023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:09:28.289468  507023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:09:28.297972  507023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:09:28.306132  507023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:09:28.310156  507023 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:09:28.310248  507023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:09:28.350913  507023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:09:28.359461  507023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:09:28.369292  507023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:09:28.373267  507023 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:09:28.373384  507023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:09:28.414655  507023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
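
Each CA bundle copied above is then published by linking it into /etc/ssl/certs under its OpenSSL subject hash (the 3ec20f2e.0, b5213941.0 and 51391683.0 names in the log). A rough Go sketch of the same hash-and-symlink step, shelling out to the openssl binary just as the runner does; the installCACert helper and paths are illustrative only:

	// installCACert links a PEM CA bundle into /etc/ssl/certs/<subject-hash>.0,
	// the layout OpenSSL uses for CA lookup. The logged commands do this via
	// `openssl x509 -hash` plus `ln -fs` under sudo over SSH.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCACert(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("openssl x509 -hash: %w", err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // emulate ln -fs: replace an existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
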
	I1123 10:09:28.423253  507023 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:09:28.427087  507023 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:09:28.427142  507023 kubeadm.go:401] StartCluster: {Name:no-preload-020224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:09:28.427217  507023 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:09:28.427279  507023 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:09:28.453943  507023 cri.go:89] found id: ""
	I1123 10:09:28.454025  507023 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:09:28.462251  507023 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:09:28.475099  507023 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:09:28.475165  507023 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:09:28.483145  507023 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:09:28.483170  507023 kubeadm.go:158] found existing configuration files:
	
	I1123 10:09:28.483229  507023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:09:28.490837  507023 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:09:28.490951  507023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:09:28.498450  507023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:09:28.506274  507023 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:09:28.506398  507023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:09:28.514175  507023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:09:28.522316  507023 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:09:28.522432  507023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:09:28.529966  507023 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:09:28.538073  507023 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:09:28.538172  507023 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:09:28.545850  507023 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:09:28.584444  507023 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:09:28.584672  507023 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:09:28.614264  507023 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:09:28.614343  507023 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 10:09:28.614387  507023 kubeadm.go:319] OS: Linux
	I1123 10:09:28.614434  507023 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:09:28.614487  507023 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 10:09:28.614538  507023 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:09:28.614590  507023 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:09:28.614642  507023 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:09:28.614695  507023 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:09:28.614744  507023 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:09:28.614795  507023 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:09:28.614845  507023 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 10:09:28.693040  507023 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:09:28.693231  507023 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:09:28.693375  507023 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:09:28.708070  507023 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:09:26.502788  510025 out.go:252] * Restarting existing docker container for "old-k8s-version-706028" ...
	I1123 10:09:26.502927  510025 cli_runner.go:164] Run: docker start old-k8s-version-706028
	I1123 10:09:26.842666  510025 cli_runner.go:164] Run: docker container inspect old-k8s-version-706028 --format={{.State.Status}}
	I1123 10:09:26.869002  510025 kic.go:430] container "old-k8s-version-706028" state is running.
	I1123 10:09:26.871983  510025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-706028
	I1123 10:09:26.905549  510025 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/config.json ...
	I1123 10:09:26.905768  510025 machine.go:94] provisionDockerMachine start ...
	I1123 10:09:26.905823  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:26.966070  510025 main.go:143] libmachine: Using SSH client type: native
	I1123 10:09:26.966431  510025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1123 10:09:26.966439  510025 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:09:26.967504  510025 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 10:09:30.145489  510025 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-706028
	
	I1123 10:09:30.145567  510025 ubuntu.go:182] provisioning hostname "old-k8s-version-706028"
	I1123 10:09:30.145672  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:30.175168  510025 main.go:143] libmachine: Using SSH client type: native
	I1123 10:09:30.175494  510025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1123 10:09:30.175513  510025 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-706028 && echo "old-k8s-version-706028" | sudo tee /etc/hostname
	I1123 10:09:30.348439  510025 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-706028
	
	I1123 10:09:30.348567  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:30.371755  510025 main.go:143] libmachine: Using SSH client type: native
	I1123 10:09:30.372077  510025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1123 10:09:30.372101  510025 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-706028' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-706028/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-706028' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:09:30.529699  510025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:09:30.529765  510025 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 10:09:30.529803  510025 ubuntu.go:190] setting up certificates
	I1123 10:09:30.529844  510025 provision.go:84] configureAuth start
	I1123 10:09:30.529921  510025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-706028
	I1123 10:09:30.551738  510025 provision.go:143] copyHostCerts
	I1123 10:09:30.551808  510025 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 10:09:30.551816  510025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 10:09:30.551891  510025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 10:09:30.551987  510025 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 10:09:30.551992  510025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 10:09:30.552017  510025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 10:09:30.552069  510025 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 10:09:30.552073  510025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 10:09:30.552097  510025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 10:09:30.552141  510025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-706028 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-706028]
	I1123 10:09:30.761442  510025 provision.go:177] copyRemoteCerts
	I1123 10:09:30.761561  510025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:09:30.761630  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:30.778968  510025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
	I1123 10:09:28.714115  507023 out.go:252]   - Generating certificates and keys ...
	I1123 10:09:28.714230  507023 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:09:28.714309  507023 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:09:29.080677  507023 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:09:29.157548  507023 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:09:29.332005  507023 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:09:30.090914  507023 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:09:30.894112  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 10:09:30.927541  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:09:30.948670  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:09:30.968656  510025 provision.go:87] duration metric: took 438.773277ms to configureAuth
	I1123 10:09:30.968684  510025 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:09:30.968872  510025 config.go:182] Loaded profile config "old-k8s-version-706028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 10:09:30.968979  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:30.987669  510025 main.go:143] libmachine: Using SSH client type: native
	I1123 10:09:30.987985  510025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1123 10:09:30.988005  510025 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:09:31.389807  510025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:09:31.389827  510025 machine.go:97] duration metric: took 4.484048435s to provisionDockerMachine
	I1123 10:09:31.389838  510025 start.go:293] postStartSetup for "old-k8s-version-706028" (driver="docker")
	I1123 10:09:31.389861  510025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:09:31.389921  510025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:09:31.389969  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:31.422998  510025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
	I1123 10:09:31.533527  510025 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:09:31.537904  510025 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:09:31.537928  510025 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:09:31.537939  510025 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 10:09:31.537997  510025 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 10:09:31.538071  510025 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 10:09:31.538167  510025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:09:31.547212  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:09:31.566787  510025 start.go:296] duration metric: took 176.934739ms for postStartSetup
	I1123 10:09:31.566876  510025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:09:31.566914  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:31.586038  510025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
	I1123 10:09:31.695308  510025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:09:31.700925  510025 fix.go:56] duration metric: took 5.257813295s for fixHost
	I1123 10:09:31.700951  510025 start.go:83] releasing machines lock for "old-k8s-version-706028", held for 5.25786347s
	I1123 10:09:31.701025  510025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-706028
	I1123 10:09:31.717675  510025 ssh_runner.go:195] Run: cat /version.json
	I1123 10:09:31.717731  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:31.717982  510025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:09:31.718048  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:31.750832  510025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
	I1123 10:09:31.759857  510025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
	I1123 10:09:31.972320  510025 ssh_runner.go:195] Run: systemctl --version
	I1123 10:09:31.979274  510025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:09:32.024162  510025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:09:32.029896  510025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:09:32.029986  510025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:09:32.039008  510025 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:09:32.039052  510025 start.go:496] detecting cgroup driver to use...
	I1123 10:09:32.039088  510025 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:09:32.039162  510025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:09:32.055896  510025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:09:32.070659  510025 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:09:32.070741  510025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:09:32.087808  510025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:09:32.102639  510025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:09:32.249883  510025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:09:32.420466  510025 docker.go:234] disabling docker service ...
	I1123 10:09:32.420539  510025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:09:32.436232  510025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:09:32.452606  510025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:09:32.590344  510025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:09:32.741257  510025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:09:32.755631  510025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:09:32.770001  510025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1123 10:09:32.770115  510025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:09:32.778798  510025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:09:32.778903  510025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:09:32.787709  510025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:09:32.796376  510025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:09:32.805076  510025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:09:32.812834  510025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:09:32.821375  510025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:09:32.829591  510025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:09:32.838369  510025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:09:32.846236  510025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:09:32.853580  510025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:09:32.996304  510025 ssh_runner.go:195] Run: sudo systemctl restart crio
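
Taken together, the sed edits above rewrite the 02-crio.conf drop-in so that CRI-O uses the pause:3.9 image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and unprivileged low ports, before crio is restarted. The resulting drop-in looks approximately like the following; the exact section headers depend on what the base image already ships, so treat this only as an approximation of the edited keys:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
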
	I1123 10:09:33.201736  510025 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:09:33.201845  510025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:09:33.210124  510025 start.go:564] Will wait 60s for crictl version
	I1123 10:09:33.210242  510025 ssh_runner.go:195] Run: which crictl
	I1123 10:09:33.214251  510025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:09:33.259548  510025 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:09:33.259706  510025 ssh_runner.go:195] Run: crio --version
	I1123 10:09:33.292540  510025 ssh_runner.go:195] Run: crio --version
	I1123 10:09:33.335336  510025 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1123 10:09:33.337796  510025 cli_runner.go:164] Run: docker network inspect old-k8s-version-706028 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:09:33.363399  510025 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:09:33.367380  510025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:09:33.377221  510025 kubeadm.go:884] updating cluster {Name:old-k8s-version-706028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706028 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:09:33.377328  510025 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 10:09:33.377380  510025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:09:33.433278  510025 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:09:33.433297  510025 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:09:33.433350  510025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:09:33.484055  510025 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:09:33.484129  510025 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:09:33.484151  510025 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1123 10:09:33.484289  510025 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-706028 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:09:33.484411  510025 ssh_runner.go:195] Run: crio config
	I1123 10:09:33.574456  510025 cni.go:84] Creating CNI manager for ""
	I1123 10:09:33.574527  510025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:09:33.574562  510025 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:09:33.574614  510025 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-706028 NodeName:old-k8s-version-706028 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:09:33.574801  510025 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-706028"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:09:33.574916  510025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 10:09:33.583770  510025 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:09:33.583888  510025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:09:33.592261  510025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1123 10:09:33.612594  510025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:09:33.633150  510025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1123 10:09:33.652223  510025 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:09:33.656430  510025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:09:33.666775  510025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:09:33.796040  510025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:09:33.810658  510025 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028 for IP: 192.168.76.2
	I1123 10:09:33.810720  510025 certs.go:195] generating shared ca certs ...
	I1123 10:09:33.810758  510025 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:33.812518  510025 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:09:33.812630  510025 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:09:33.812669  510025 certs.go:257] generating profile certs ...
	I1123 10:09:33.812819  510025 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/client.key
	I1123 10:09:33.812924  510025 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/apiserver.key.494e02ad
	I1123 10:09:33.813028  510025 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/proxy-client.key
	I1123 10:09:33.813198  510025 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:09:33.813266  510025 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:09:33.813291  510025 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:09:33.813348  510025 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:09:33.813437  510025 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:09:33.813508  510025 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:09:33.813598  510025 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:09:33.814304  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:09:33.846039  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:09:33.886031  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:09:33.929221  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:09:33.983668  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 10:09:34.031411  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:09:34.094565  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:09:34.138846  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:09:34.166231  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:09:34.185743  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:09:34.204334  510025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:09:34.222295  510025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:09:34.235383  510025 ssh_runner.go:195] Run: openssl version
	I1123 10:09:34.242256  510025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:09:34.250714  510025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:09:34.254462  510025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:09:34.254565  510025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:09:34.301841  510025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:09:34.310849  510025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:09:34.319474  510025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:09:34.323372  510025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:09:34.323484  510025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:09:34.367045  510025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 10:09:34.375453  510025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:09:34.384024  510025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:09:34.387853  510025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:09:34.387963  510025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:09:34.431172  510025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:09:34.439459  510025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:09:34.443711  510025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:09:34.490008  510025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:09:34.533374  510025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:09:34.590145  510025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:09:34.702355  510025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:09:34.785341  510025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
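
Because the old-k8s-version profile already has certificates on disk, the restart path only verifies that each one remains valid for at least another day (`openssl x509 -noout -checkend 86400`). An equivalent check in Go, for reference; certValidFor is a made-up helper name, and the path is just one of the files probed above:

	// certValidFor reports whether the first certificate in a PEM file is still
	// valid for at least the given duration, mirroring `-checkend 86400`.
	// Sketch only, not minikube's own code.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func certValidFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}
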
	I1123 10:09:34.845048  510025 kubeadm.go:401] StartCluster: {Name:old-k8s-version-706028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-706028 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:09:34.845150  510025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:09:34.845214  510025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:09:35.032137  510025 cri.go:89] found id: "34ee70a0be166e12e57fb579eaa0cb22b8873a626bdc6ae8d83d81bfcbff7280"
	I1123 10:09:35.032160  510025 cri.go:89] found id: "98f50d387d5b2fded7f07e260ceb83bce5a609dc2bd07303f78f93578f6d82ed"
	I1123 10:09:35.032166  510025 cri.go:89] found id: "676b2dbee75eee912c3a604195863ba16974dcbd9b686ff17513a405a42b3e91"
	I1123 10:09:35.032175  510025 cri.go:89] found id: "ea67be45b14c0ca0ac41632b23ebd8095b8b2a16235fddfd8d5a4b1519577720"
	I1123 10:09:35.032179  510025 cri.go:89] found id: ""
	I1123 10:09:35.032229  510025 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:09:35.091309  510025 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:09:35Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:09:35.091403  510025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:09:35.125779  510025 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:09:35.125800  510025 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:09:35.125866  510025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:09:35.161965  510025 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:09:35.162380  510025 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-706028" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:09:35.162489  510025 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-282998/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-706028" cluster setting kubeconfig missing "old-k8s-version-706028" context setting]
	I1123 10:09:35.162844  510025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:35.164106  510025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:09:35.205918  510025 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:09:35.205955  510025 kubeadm.go:602] duration metric: took 80.147689ms to restartPrimaryControlPlane
	I1123 10:09:35.205965  510025 kubeadm.go:403] duration metric: took 360.928776ms to StartCluster
	I1123 10:09:35.205982  510025 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:35.206051  510025 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:09:35.206650  510025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:35.206865  510025 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:09:35.207183  510025 config.go:182] Loaded profile config "old-k8s-version-706028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 10:09:35.207233  510025 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:09:35.207369  510025 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-706028"
	I1123 10:09:35.207390  510025 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-706028"
	W1123 10:09:35.207406  510025 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:09:35.207428  510025 host.go:66] Checking if "old-k8s-version-706028" exists ...
	I1123 10:09:35.207937  510025 cli_runner.go:164] Run: docker container inspect old-k8s-version-706028 --format={{.State.Status}}
	I1123 10:09:35.208296  510025 addons.go:70] Setting dashboard=true in profile "old-k8s-version-706028"
	I1123 10:09:35.208332  510025 addons.go:239] Setting addon dashboard=true in "old-k8s-version-706028"
	W1123 10:09:35.208342  510025 addons.go:248] addon dashboard should already be in state true
	I1123 10:09:35.208366  510025 host.go:66] Checking if "old-k8s-version-706028" exists ...
	I1123 10:09:35.208602  510025 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-706028"
	I1123 10:09:35.208619  510025 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-706028"
	I1123 10:09:35.208864  510025 cli_runner.go:164] Run: docker container inspect old-k8s-version-706028 --format={{.State.Status}}
	I1123 10:09:35.209320  510025 cli_runner.go:164] Run: docker container inspect old-k8s-version-706028 --format={{.State.Status}}
	I1123 10:09:35.212760  510025 out.go:179] * Verifying Kubernetes components...
	I1123 10:09:35.220601  510025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:09:35.253078  510025 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:09:35.256625  510025 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:09:35.264273  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:09:35.264307  510025 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:09:35.264395  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:35.275400  510025 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:09:35.276507  510025 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-706028"
	W1123 10:09:35.276525  510025 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:09:35.276550  510025 host.go:66] Checking if "old-k8s-version-706028" exists ...
	I1123 10:09:35.276978  510025 cli_runner.go:164] Run: docker container inspect old-k8s-version-706028 --format={{.State.Status}}
	I1123 10:09:35.279173  510025 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:09:35.279198  510025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:09:35.279264  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:35.320480  510025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
	I1123 10:09:35.338505  510025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
	I1123 10:09:35.341705  510025 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:09:35.341724  510025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:09:35.341785  510025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706028
	I1123 10:09:35.370465  510025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/old-k8s-version-706028/id_rsa Username:docker}
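	Each of the scp/ssh steps above first resolves the node's SSH endpoint by rendering a Go template against "docker container inspect"; the template pulls the host port published for the container's 22/tcp, which is where the 127.0.0.1:33471 ssh clients come from. Run by hand it looks like:

	  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-706028
	  # prints the published SSH port on the host (33471 in this run)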
	I1123 10:09:35.716966  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:09:35.717046  510025 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:09:35.729966  510025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:09:35.746727  510025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:09:32.026756  507023 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:09:32.027354  507023 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-020224] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:09:32.865744  507023 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:09:32.865886  507023 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-020224] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:09:33.737912  507023 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:09:34.527711  507023 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:09:35.657738  507023 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:09:35.659174  507023 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:09:36.865749  507023 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:09:37.179989  507023 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:09:37.713652  507023 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:09:37.857350  507023 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:09:38.059996  507023 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:09:38.061199  507023 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:09:38.072369  507023 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:09:35.843403  510025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:09:35.859066  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:09:35.859138  510025 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:09:35.886520  510025 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-706028" to be "Ready" ...
	I1123 10:09:35.977880  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:09:35.977956  510025 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:09:36.134230  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:09:36.134302  510025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:09:36.234173  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:09:36.234246  510025 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:09:36.313888  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:09:36.313967  510025 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:09:36.349608  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:09:36.349681  510025 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:09:36.380793  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:09:36.380868  510025 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:09:36.438939  510025 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:09:36.439013  510025 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:09:36.468020  510025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
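	All of the dashboard pieces are staged onto the node under /etc/kubernetes/addons and then applied in one shot with the cluster's own kubectl binary, as the command above shows. A quick sanity check of what got staged, assuming the profile is still running, would be:

	  minikube -p old-k8s-version-706028 ssh -- ls /etc/kubernetes/addons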
	I1123 10:09:38.075902  507023 out.go:252]   - Booting up control plane ...
	I1123 10:09:38.076014  507023 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:09:38.076345  507023 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:09:38.077797  507023 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:09:38.095384  507023 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:09:38.095493  507023 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:09:38.105104  507023 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:09:38.105206  507023 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:09:38.105245  507023 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:09:38.317472  507023 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:09:38.317595  507023 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 10:09:39.321755  507023 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000868856s
	I1123 10:09:39.321863  507023 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:09:39.321944  507023 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1123 10:09:39.322033  507023 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:09:39.322111  507023 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:09:42.012477  510025 node_ready.go:49] node "old-k8s-version-706028" is "Ready"
	I1123 10:09:42.012508  510025 node_ready.go:38] duration metric: took 6.125883767s for node "old-k8s-version-706028" to be "Ready" ...
	I1123 10:09:42.012523  510025 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:09:42.012590  510025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:09:44.653863  510025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.907055628s)
	I1123 10:09:45.467460  510025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.623967138s)
	I1123 10:09:46.053767  510025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.58565667s)
	I1123 10:09:46.053985  510025 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.041383108s)
	I1123 10:09:46.054014  510025 api_server.go:72] duration metric: took 10.847108011s to wait for apiserver process to appear ...
	I1123 10:09:46.054022  510025 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:09:46.054039  510025 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:09:46.056860  510025 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-706028 addons enable metrics-server
	
	I1123 10:09:46.059785  510025 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1123 10:09:44.569909  507023 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.248185167s
	I1123 10:09:47.270468  507023 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.94909917s
	I1123 10:09:48.823470  507023 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.501915291s
	I1123 10:09:48.847000  507023 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:09:48.867488  507023 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:09:48.883089  507023 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:09:48.883305  507023 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-020224 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:09:48.895711  507023 kubeadm.go:319] [bootstrap-token] Using token: 8qqp89.w1nl5taaj7197tdy
	I1123 10:09:48.898487  507023 out.go:252]   - Configuring RBAC rules ...
	I1123 10:09:48.898611  507023 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:09:48.903949  507023 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:09:48.912548  507023 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:09:48.916794  507023 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:09:48.923117  507023 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:09:48.927219  507023 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:09:49.231985  507023 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:09:49.660319  507023 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:09:50.232963  507023 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:09:50.234135  507023 kubeadm.go:319] 
	I1123 10:09:50.234220  507023 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:09:50.234231  507023 kubeadm.go:319] 
	I1123 10:09:50.234307  507023 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:09:50.234320  507023 kubeadm.go:319] 
	I1123 10:09:50.234346  507023 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:09:50.234409  507023 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:09:50.234463  507023 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:09:50.234471  507023 kubeadm.go:319] 
	I1123 10:09:50.234532  507023 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:09:50.234540  507023 kubeadm.go:319] 
	I1123 10:09:50.234588  507023 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:09:50.234595  507023 kubeadm.go:319] 
	I1123 10:09:50.234647  507023 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:09:50.234725  507023 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:09:50.234803  507023 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:09:50.234811  507023 kubeadm.go:319] 
	I1123 10:09:50.234895  507023 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:09:50.234971  507023 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:09:50.234975  507023 kubeadm.go:319] 
	I1123 10:09:50.235059  507023 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8qqp89.w1nl5taaj7197tdy \
	I1123 10:09:50.235162  507023 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:887f8119ffe4d5a917d34cb24e0eb6ba3996e6bcce8cd575315ae96526a54a7e \
	I1123 10:09:50.235182  507023 kubeadm.go:319] 	--control-plane 
	I1123 10:09:50.235186  507023 kubeadm.go:319] 
	I1123 10:09:50.235270  507023 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:09:50.235276  507023 kubeadm.go:319] 
	I1123 10:09:50.235358  507023 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8qqp89.w1nl5taaj7197tdy \
	I1123 10:09:50.235461  507023 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:887f8119ffe4d5a917d34cb24e0eb6ba3996e6bcce8cd575315ae96526a54a7e 
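	The --discovery-token-ca-cert-hash printed in the join command is the SHA-256 of the cluster CA's public key. To re-derive or verify it on the control-plane node, the standard openssl pipeline documented for kubeadm can be used:

	  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'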
	I1123 10:09:50.239404  507023 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 10:09:50.239637  507023 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 10:09:50.239747  507023 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:09:50.239773  507023 cni.go:84] Creating CNI manager for ""
	I1123 10:09:50.239831  507023 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:09:50.244833  507023 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 10:09:46.062661  510025 addons.go:530] duration metric: took 10.855424996s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1123 10:09:46.072442  510025 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:09:46.074321  510025 api_server.go:141] control plane version: v1.28.0
	I1123 10:09:46.074345  510025 api_server.go:131] duration metric: took 20.317301ms to wait for apiserver health ...
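	The healthz wait above polls the API server endpoint over HTTPS until it answers 200. The same probe can be reproduced from the host, either through kubectl's raw passthrough (assuming the kubeconfig context created for this profile) or with curl straight at the endpoint from the log; -k is needed because the serving cert is signed by the cluster CA:

	  kubectl --context old-k8s-version-706028 get --raw /healthz
	  curl -k https://192.168.76.2:8443/healthz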
	I1123 10:09:46.074354  510025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:09:46.079068  510025 system_pods.go:59] 8 kube-system pods found
	I1123 10:09:46.079157  510025 system_pods.go:61] "coredns-5dd5756b68-h6b8n" [11c29962-a28a-4015-9014-96acb48fefc1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:09:46.079182  510025 system_pods.go:61] "etcd-old-k8s-version-706028" [994d2bc9-8d4e-4211-a391-67531749ae73] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:09:46.079250  510025 system_pods.go:61] "kindnet-6l8w5" [3045e3bc-b846-45c6-a4ff-39e877bbf8ef] Running
	I1123 10:09:46.079279  510025 system_pods.go:61] "kube-apiserver-old-k8s-version-706028" [5fdf9127-966c-4a06-8fd6-4c3ae574b0a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:09:46.079314  510025 system_pods.go:61] "kube-controller-manager-old-k8s-version-706028" [5c49bac9-5830-437f-bf92-5caffda221fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:09:46.079340  510025 system_pods.go:61] "kube-proxy-s9rqv" [2aea0615-8684-4805-8c5d-f37fb042cc30] Running
	I1123 10:09:46.079362  510025 system_pods.go:61] "kube-scheduler-old-k8s-version-706028" [bd09b544-f854-4b15-a1ea-124bdfb16b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:09:46.079400  510025 system_pods.go:61] "storage-provisioner" [4bc52b3c-0d21-412d-bf6b-74f8dab91ac1] Running
	I1123 10:09:46.079427  510025 system_pods.go:74] duration metric: took 5.066869ms to wait for pod list to return data ...
	I1123 10:09:46.079451  510025 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:09:46.087656  510025 default_sa.go:45] found service account: "default"
	I1123 10:09:46.087729  510025 default_sa.go:55] duration metric: took 8.240045ms for default service account to be created ...
	I1123 10:09:46.087753  510025 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:09:46.091525  510025 system_pods.go:86] 8 kube-system pods found
	I1123 10:09:46.091609  510025 system_pods.go:89] "coredns-5dd5756b68-h6b8n" [11c29962-a28a-4015-9014-96acb48fefc1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:09:46.091635  510025 system_pods.go:89] "etcd-old-k8s-version-706028" [994d2bc9-8d4e-4211-a391-67531749ae73] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:09:46.091674  510025 system_pods.go:89] "kindnet-6l8w5" [3045e3bc-b846-45c6-a4ff-39e877bbf8ef] Running
	I1123 10:09:46.091701  510025 system_pods.go:89] "kube-apiserver-old-k8s-version-706028" [5fdf9127-966c-4a06-8fd6-4c3ae574b0a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:09:46.091722  510025 system_pods.go:89] "kube-controller-manager-old-k8s-version-706028" [5c49bac9-5830-437f-bf92-5caffda221fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:09:46.091757  510025 system_pods.go:89] "kube-proxy-s9rqv" [2aea0615-8684-4805-8c5d-f37fb042cc30] Running
	I1123 10:09:46.091786  510025 system_pods.go:89] "kube-scheduler-old-k8s-version-706028" [bd09b544-f854-4b15-a1ea-124bdfb16b4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:09:46.091807  510025 system_pods.go:89] "storage-provisioner" [4bc52b3c-0d21-412d-bf6b-74f8dab91ac1] Running
	I1123 10:09:46.091844  510025 system_pods.go:126] duration metric: took 4.071472ms to wait for k8s-apps to be running ...
	I1123 10:09:46.091872  510025 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:09:46.091955  510025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:09:46.107867  510025 system_svc.go:56] duration metric: took 15.974153ms WaitForService to wait for kubelet
	I1123 10:09:46.107946  510025 kubeadm.go:587] duration metric: took 10.901047246s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:09:46.107988  510025 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:09:46.111285  510025 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:09:46.111365  510025 node_conditions.go:123] node cpu capacity is 2
	I1123 10:09:46.111408  510025 node_conditions.go:105] duration metric: took 3.378955ms to run NodePressure ...
	I1123 10:09:46.111440  510025 start.go:242] waiting for startup goroutines ...
	I1123 10:09:46.111464  510025 start.go:247] waiting for cluster config update ...
	I1123 10:09:46.111501  510025 start.go:256] writing updated cluster config ...
	I1123 10:09:46.111848  510025 ssh_runner.go:195] Run: rm -f paused
	I1123 10:09:46.116170  510025 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:09:46.123416  510025 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-h6b8n" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:09:48.129809  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:09:50.629071  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	I1123 10:09:50.247619  507023 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:09:50.251999  507023 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:09:50.252022  507023 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:09:50.265007  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:09:50.577755  507023 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:09:50.577898  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:50.577992  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-020224 minikube.k8s.io/updated_at=2025_11_23T10_09_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=no-preload-020224 minikube.k8s.io/primary=true
	I1123 10:09:50.716758  507023 ops.go:34] apiserver oom_adj: -16
	I1123 10:09:50.716862  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:51.217964  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:51.717695  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:52.216929  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:52.717348  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:53.217755  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:53.717021  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:54.217703  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:54.717265  507023 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:09:54.814720  507023 kubeadm.go:1114] duration metric: took 4.236868611s to wait for elevateKubeSystemPrivileges
	I1123 10:09:54.814756  507023 kubeadm.go:403] duration metric: took 26.387618997s to StartCluster
	I1123 10:09:54.814775  507023 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:54.814861  507023 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:09:54.815840  507023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:54.816077  507023 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:09:54.816093  507023 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:09:54.816362  507023 config.go:182] Loaded profile config "no-preload-020224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:09:54.816410  507023 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:09:54.816483  507023 addons.go:70] Setting storage-provisioner=true in profile "no-preload-020224"
	I1123 10:09:54.816499  507023 addons.go:239] Setting addon storage-provisioner=true in "no-preload-020224"
	I1123 10:09:54.816526  507023 host.go:66] Checking if "no-preload-020224" exists ...
	I1123 10:09:54.816525  507023 addons.go:70] Setting default-storageclass=true in profile "no-preload-020224"
	I1123 10:09:54.816543  507023 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-020224"
	I1123 10:09:54.816857  507023 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:09:54.817006  507023 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:09:54.819245  507023 out.go:179] * Verifying Kubernetes components...
	I1123 10:09:54.822295  507023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:09:54.856610  507023 addons.go:239] Setting addon default-storageclass=true in "no-preload-020224"
	I1123 10:09:54.856652  507023 host.go:66] Checking if "no-preload-020224" exists ...
	I1123 10:09:54.857066  507023 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:09:54.868394  507023 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1123 10:09:52.630556  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:09:55.139752  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	I1123 10:09:54.873034  507023 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:09:54.873060  507023 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:09:54.873144  507023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:09:54.900717  507023 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:09:54.900740  507023 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:09:54.900803  507023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:09:54.923047  507023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:09:54.943020  507023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:09:55.227160  507023 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:09:55.345474  507023 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:09:55.345586  507023 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:09:55.371195  507023 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:09:56.298483  507023 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.071290237s)
	I1123 10:09:56.299413  507023 node_ready.go:35] waiting up to 6m0s for node "no-preload-020224" to be "Ready" ...
	I1123 10:09:56.299703  507023 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
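	The host-record line above is the outcome of the long sed pipeline run at 10:09:55, which rewrites the coredns ConfigMap in place to add a hosts block mapping host.minikube.internal to 192.168.85.1. A quick way to confirm the injected block, assuming kubectl is pointed at the no-preload-020224 context, is:

	  kubectl --context no-preload-020224 -n kube-system get configmap coredns -o yaml | grep -A4 'hosts {'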
	I1123 10:09:56.359702  507023 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1123 10:09:57.631453  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:09:59.647935  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	I1123 10:09:56.362681  507023 addons.go:530] duration metric: took 1.546265819s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:09:56.812226  507023 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-020224" context rescaled to 1 replicas
	W1123 10:09:58.303234  507023 node_ready.go:57] node "no-preload-020224" has "Ready":"False" status (will retry)
	W1123 10:10:00.312323  507023 node_ready.go:57] node "no-preload-020224" has "Ready":"False" status (will retry)
	W1123 10:10:02.133676  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:10:04.628699  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:10:02.809713  507023 node_ready.go:57] node "no-preload-020224" has "Ready":"False" status (will retry)
	W1123 10:10:05.303025  507023 node_ready.go:57] node "no-preload-020224" has "Ready":"False" status (will retry)
	W1123 10:10:06.630741  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:10:09.129396  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:10:07.805356  507023 node_ready.go:57] node "no-preload-020224" has "Ready":"False" status (will retry)
	W1123 10:10:09.806317  507023 node_ready.go:57] node "no-preload-020224" has "Ready":"False" status (will retry)
	I1123 10:10:11.803086  507023 node_ready.go:49] node "no-preload-020224" is "Ready"
	I1123 10:10:11.803113  507023 node_ready.go:38] duration metric: took 15.50367439s for node "no-preload-020224" to be "Ready" ...
	I1123 10:10:11.803127  507023 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:10:11.803184  507023 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:10:11.834253  507023 api_server.go:72] duration metric: took 17.018129095s to wait for apiserver process to appear ...
	I1123 10:10:11.834277  507023 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:10:11.834296  507023 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:10:11.849038  507023 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 10:10:11.850114  507023 api_server.go:141] control plane version: v1.34.1
	I1123 10:10:11.850135  507023 api_server.go:131] duration metric: took 15.851432ms to wait for apiserver health ...
	I1123 10:10:11.850143  507023 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:10:11.852805  507023 system_pods.go:59] 8 kube-system pods found
	I1123 10:10:11.852831  507023 system_pods.go:61] "coredns-66bc5c9577-v59bz" [9cd5752f-f6a3-4db9-a644-1c18ff268642] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:10:11.852838  507023 system_pods.go:61] "etcd-no-preload-020224" [8dccbade-8a60-4d0f-9676-d6a2755663f9] Running
	I1123 10:10:11.852843  507023 system_pods.go:61] "kindnet-ghq9t" [a82575e8-2a03-4722-8611-dab3ceda4f39] Running
	I1123 10:10:11.852847  507023 system_pods.go:61] "kube-apiserver-no-preload-020224" [a7f60049-0c2f-4359-9d93-d13658d03d02] Running
	I1123 10:10:11.852851  507023 system_pods.go:61] "kube-controller-manager-no-preload-020224" [8a60d5f3-d38b-408b-ac99-8e9e3cc1da22] Running
	I1123 10:10:11.852855  507023 system_pods.go:61] "kube-proxy-7s6pf" [54924ab5-094f-48de-8483-f31455e53773] Running
	I1123 10:10:11.852858  507023 system_pods.go:61] "kube-scheduler-no-preload-020224" [313e344b-1c48-4c74-8237-387cff8a8c8b] Running
	I1123 10:10:11.852863  507023 system_pods.go:61] "storage-provisioner" [6796ee0a-02e3-4c46-a03b-115136ad2780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:10:11.852869  507023 system_pods.go:74] duration metric: took 2.720269ms to wait for pod list to return data ...
	I1123 10:10:11.852877  507023 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:10:11.855391  507023 default_sa.go:45] found service account: "default"
	I1123 10:10:11.855410  507023 default_sa.go:55] duration metric: took 2.527075ms for default service account to be created ...
	I1123 10:10:11.855418  507023 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:10:11.861325  507023 system_pods.go:86] 8 kube-system pods found
	I1123 10:10:11.861459  507023 system_pods.go:89] "coredns-66bc5c9577-v59bz" [9cd5752f-f6a3-4db9-a644-1c18ff268642] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:10:11.861500  507023 system_pods.go:89] "etcd-no-preload-020224" [8dccbade-8a60-4d0f-9676-d6a2755663f9] Running
	I1123 10:10:11.861516  507023 system_pods.go:89] "kindnet-ghq9t" [a82575e8-2a03-4722-8611-dab3ceda4f39] Running
	I1123 10:10:11.861523  507023 system_pods.go:89] "kube-apiserver-no-preload-020224" [a7f60049-0c2f-4359-9d93-d13658d03d02] Running
	I1123 10:10:11.861528  507023 system_pods.go:89] "kube-controller-manager-no-preload-020224" [8a60d5f3-d38b-408b-ac99-8e9e3cc1da22] Running
	I1123 10:10:11.861534  507023 system_pods.go:89] "kube-proxy-7s6pf" [54924ab5-094f-48de-8483-f31455e53773] Running
	I1123 10:10:11.861538  507023 system_pods.go:89] "kube-scheduler-no-preload-020224" [313e344b-1c48-4c74-8237-387cff8a8c8b] Running
	I1123 10:10:11.861557  507023 system_pods.go:89] "storage-provisioner" [6796ee0a-02e3-4c46-a03b-115136ad2780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:10:11.861581  507023 retry.go:31] will retry after 233.21626ms: missing components: kube-dns
	I1123 10:10:12.098878  507023 system_pods.go:86] 8 kube-system pods found
	I1123 10:10:12.098918  507023 system_pods.go:89] "coredns-66bc5c9577-v59bz" [9cd5752f-f6a3-4db9-a644-1c18ff268642] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:10:12.098926  507023 system_pods.go:89] "etcd-no-preload-020224" [8dccbade-8a60-4d0f-9676-d6a2755663f9] Running
	I1123 10:10:12.098932  507023 system_pods.go:89] "kindnet-ghq9t" [a82575e8-2a03-4722-8611-dab3ceda4f39] Running
	I1123 10:10:12.098959  507023 system_pods.go:89] "kube-apiserver-no-preload-020224" [a7f60049-0c2f-4359-9d93-d13658d03d02] Running
	I1123 10:10:12.098970  507023 system_pods.go:89] "kube-controller-manager-no-preload-020224" [8a60d5f3-d38b-408b-ac99-8e9e3cc1da22] Running
	I1123 10:10:12.098974  507023 system_pods.go:89] "kube-proxy-7s6pf" [54924ab5-094f-48de-8483-f31455e53773] Running
	I1123 10:10:12.098978  507023 system_pods.go:89] "kube-scheduler-no-preload-020224" [313e344b-1c48-4c74-8237-387cff8a8c8b] Running
	I1123 10:10:12.098987  507023 system_pods.go:89] "storage-provisioner" [6796ee0a-02e3-4c46-a03b-115136ad2780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:10:12.099006  507023 retry.go:31] will retry after 381.18787ms: missing components: kube-dns
	I1123 10:10:12.483736  507023 system_pods.go:86] 8 kube-system pods found
	I1123 10:10:12.483772  507023 system_pods.go:89] "coredns-66bc5c9577-v59bz" [9cd5752f-f6a3-4db9-a644-1c18ff268642] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:10:12.483779  507023 system_pods.go:89] "etcd-no-preload-020224" [8dccbade-8a60-4d0f-9676-d6a2755663f9] Running
	I1123 10:10:12.483796  507023 system_pods.go:89] "kindnet-ghq9t" [a82575e8-2a03-4722-8611-dab3ceda4f39] Running
	I1123 10:10:12.483802  507023 system_pods.go:89] "kube-apiserver-no-preload-020224" [a7f60049-0c2f-4359-9d93-d13658d03d02] Running
	I1123 10:10:12.483807  507023 system_pods.go:89] "kube-controller-manager-no-preload-020224" [8a60d5f3-d38b-408b-ac99-8e9e3cc1da22] Running
	I1123 10:10:12.483811  507023 system_pods.go:89] "kube-proxy-7s6pf" [54924ab5-094f-48de-8483-f31455e53773] Running
	I1123 10:10:12.483814  507023 system_pods.go:89] "kube-scheduler-no-preload-020224" [313e344b-1c48-4c74-8237-387cff8a8c8b] Running
	I1123 10:10:12.483820  507023 system_pods.go:89] "storage-provisioner" [6796ee0a-02e3-4c46-a03b-115136ad2780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:10:12.483834  507023 retry.go:31] will retry after 370.76949ms: missing components: kube-dns
	I1123 10:10:12.858596  507023 system_pods.go:86] 8 kube-system pods found
	I1123 10:10:12.858630  507023 system_pods.go:89] "coredns-66bc5c9577-v59bz" [9cd5752f-f6a3-4db9-a644-1c18ff268642] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:10:12.858638  507023 system_pods.go:89] "etcd-no-preload-020224" [8dccbade-8a60-4d0f-9676-d6a2755663f9] Running
	I1123 10:10:12.858644  507023 system_pods.go:89] "kindnet-ghq9t" [a82575e8-2a03-4722-8611-dab3ceda4f39] Running
	I1123 10:10:12.858648  507023 system_pods.go:89] "kube-apiserver-no-preload-020224" [a7f60049-0c2f-4359-9d93-d13658d03d02] Running
	I1123 10:10:12.858656  507023 system_pods.go:89] "kube-controller-manager-no-preload-020224" [8a60d5f3-d38b-408b-ac99-8e9e3cc1da22] Running
	I1123 10:10:12.858660  507023 system_pods.go:89] "kube-proxy-7s6pf" [54924ab5-094f-48de-8483-f31455e53773] Running
	I1123 10:10:12.858664  507023 system_pods.go:89] "kube-scheduler-no-preload-020224" [313e344b-1c48-4c74-8237-387cff8a8c8b] Running
	I1123 10:10:12.858670  507023 system_pods.go:89] "storage-provisioner" [6796ee0a-02e3-4c46-a03b-115136ad2780] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:10:12.858690  507023 retry.go:31] will retry after 522.823077ms: missing components: kube-dns
	I1123 10:10:13.385899  507023 system_pods.go:86] 8 kube-system pods found
	I1123 10:10:13.385933  507023 system_pods.go:89] "coredns-66bc5c9577-v59bz" [9cd5752f-f6a3-4db9-a644-1c18ff268642] Running
	I1123 10:10:13.385940  507023 system_pods.go:89] "etcd-no-preload-020224" [8dccbade-8a60-4d0f-9676-d6a2755663f9] Running
	I1123 10:10:13.385945  507023 system_pods.go:89] "kindnet-ghq9t" [a82575e8-2a03-4722-8611-dab3ceda4f39] Running
	I1123 10:10:13.385950  507023 system_pods.go:89] "kube-apiserver-no-preload-020224" [a7f60049-0c2f-4359-9d93-d13658d03d02] Running
	I1123 10:10:13.385955  507023 system_pods.go:89] "kube-controller-manager-no-preload-020224" [8a60d5f3-d38b-408b-ac99-8e9e3cc1da22] Running
	I1123 10:10:13.385959  507023 system_pods.go:89] "kube-proxy-7s6pf" [54924ab5-094f-48de-8483-f31455e53773] Running
	I1123 10:10:13.385963  507023 system_pods.go:89] "kube-scheduler-no-preload-020224" [313e344b-1c48-4c74-8237-387cff8a8c8b] Running
	I1123 10:10:13.385967  507023 system_pods.go:89] "storage-provisioner" [6796ee0a-02e3-4c46-a03b-115136ad2780] Running
	I1123 10:10:13.385974  507023 system_pods.go:126] duration metric: took 1.530550599s to wait for k8s-apps to be running ...
	I1123 10:10:13.385986  507023 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:10:13.386043  507023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:10:13.401187  507023 system_svc.go:56] duration metric: took 15.19181ms WaitForService to wait for kubelet
	I1123 10:10:13.401215  507023 kubeadm.go:587] duration metric: took 18.585095156s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:10:13.401232  507023 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:10:13.404855  507023 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:10:13.404884  507023 node_conditions.go:123] node cpu capacity is 2
	I1123 10:10:13.404897  507023 node_conditions.go:105] duration metric: took 3.659936ms to run NodePressure ...
	I1123 10:10:13.404911  507023 start.go:242] waiting for startup goroutines ...
	I1123 10:10:13.404918  507023 start.go:247] waiting for cluster config update ...
	I1123 10:10:13.404929  507023 start.go:256] writing updated cluster config ...
	I1123 10:10:13.405217  507023 ssh_runner.go:195] Run: rm -f paused
	I1123 10:10:13.412081  507023 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:10:13.415690  507023 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v59bz" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:13.421477  507023 pod_ready.go:94] pod "coredns-66bc5c9577-v59bz" is "Ready"
	I1123 10:10:13.421509  507023 pod_ready.go:86] duration metric: took 5.79196ms for pod "coredns-66bc5c9577-v59bz" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:13.424109  507023 pod_ready.go:83] waiting for pod "etcd-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:13.428887  507023 pod_ready.go:94] pod "etcd-no-preload-020224" is "Ready"
	I1123 10:10:13.428913  507023 pod_ready.go:86] duration metric: took 4.780094ms for pod "etcd-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:13.431451  507023 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:13.438399  507023 pod_ready.go:94] pod "kube-apiserver-no-preload-020224" is "Ready"
	I1123 10:10:13.438510  507023 pod_ready.go:86] duration metric: took 7.032745ms for pod "kube-apiserver-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:13.448875  507023 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:13.815948  507023 pod_ready.go:94] pod "kube-controller-manager-no-preload-020224" is "Ready"
	I1123 10:10:13.815975  507023 pod_ready.go:86] duration metric: took 367.025898ms for pod "kube-controller-manager-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:14.016632  507023 pod_ready.go:83] waiting for pod "kube-proxy-7s6pf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:14.415916  507023 pod_ready.go:94] pod "kube-proxy-7s6pf" is "Ready"
	I1123 10:10:14.415991  507023 pod_ready.go:86] duration metric: took 399.329072ms for pod "kube-proxy-7s6pf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:14.617741  507023 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:15.027806  507023 pod_ready.go:94] pod "kube-scheduler-no-preload-020224" is "Ready"
	I1123 10:10:15.027833  507023 pod_ready.go:86] duration metric: took 410.055337ms for pod "kube-scheduler-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:15.027848  507023 pod_ready.go:40] duration metric: took 1.615729566s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:10:15.102996  507023 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:10:15.108795  507023 out.go:179] * Done! kubectl is now configured to use "no-preload-020224" cluster and "default" namespace by default
	W1123 10:10:11.129640  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:10:13.629506  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:10:16.128952  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:10:18.129394  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	W1123 10:10:20.630325  510025 pod_ready.go:104] pod "coredns-5dd5756b68-h6b8n" is not "Ready", error: <nil>
	I1123 10:10:23.129810  510025 pod_ready.go:94] pod "coredns-5dd5756b68-h6b8n" is "Ready"
	I1123 10:10:23.129839  510025 pod_ready.go:86] duration metric: took 37.006351007s for pod "coredns-5dd5756b68-h6b8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:23.133135  510025 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-706028" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:23.138151  510025 pod_ready.go:94] pod "etcd-old-k8s-version-706028" is "Ready"
	I1123 10:10:23.138175  510025 pod_ready.go:86] duration metric: took 5.008988ms for pod "etcd-old-k8s-version-706028" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:23.141508  510025 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-706028" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:23.146675  510025 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-706028" is "Ready"
	I1123 10:10:23.146701  510025 pod_ready.go:86] duration metric: took 5.169418ms for pod "kube-apiserver-old-k8s-version-706028" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:23.149711  510025 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-706028" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:23.328518  510025 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-706028" is "Ready"
	I1123 10:10:23.328544  510025 pod_ready.go:86] duration metric: took 178.810924ms for pod "kube-controller-manager-old-k8s-version-706028" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:23.527352  510025 pod_ready.go:83] waiting for pod "kube-proxy-s9rqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:23.927131  510025 pod_ready.go:94] pod "kube-proxy-s9rqv" is "Ready"
	I1123 10:10:23.927158  510025 pod_ready.go:86] duration metric: took 399.732025ms for pod "kube-proxy-s9rqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:24.128621  510025 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-706028" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:24.527488  510025 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-706028" is "Ready"
	I1123 10:10:24.527514  510025 pod_ready.go:86] duration metric: took 398.85625ms for pod "kube-scheduler-old-k8s-version-706028" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:10:24.527526  510025 pod_ready.go:40] duration metric: took 38.411281224s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:10:24.613564  510025 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1123 10:10:24.616787  510025 out.go:203] 
	W1123 10:10:24.619625  510025 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 10:10:24.622595  510025 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 10:10:24.625465  510025 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-706028" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.409822308Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.413521735Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.413557945Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.413586894Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.416820073Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.416855101Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.416877871Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.42019545Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.420232431Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.420260911Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.423849953Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.423887501Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.156132987Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=82406df9-d319-4914-a8c6-4c70407cc01d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.157026795Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f63bef76-e816-44bd-a067-0bdde95e8a07 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.158661474Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5/dashboard-metrics-scraper" id=a4909f85-993d-44f7-b210-d2dededfe71e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.158774124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.16752103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.16823022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.197946863Z" level=info msg="Created container c5ec8602847c185ff0bd5b175bcda823368a9380dc871c5be2c3ac84fd5e4292: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5/dashboard-metrics-scraper" id=a4909f85-993d-44f7-b210-d2dededfe71e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.199035727Z" level=info msg="Starting container: c5ec8602847c185ff0bd5b175bcda823368a9380dc871c5be2c3ac84fd5e4292" id=1a49505e-ad39-4e92-ae0f-6329f916bc41 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.200779174Z" level=info msg="Started container" PID=1715 containerID=c5ec8602847c185ff0bd5b175bcda823368a9380dc871c5be2c3ac84fd5e4292 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5/dashboard-metrics-scraper id=1a49505e-ad39-4e92-ae0f-6329f916bc41 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bac9a935316662bcb0069d1475868d37801a5fdb3080582ee6ef356602e0cb73
	Nov 23 10:10:28 old-k8s-version-706028 conmon[1712]: conmon c5ec8602847c185ff0bd <ninfo>: container 1715 exited with status 1
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.451250837Z" level=info msg="Removing container: 5de7b7bb4ab5b868f88423fe2fc3ba9adcea499999409590c7abb10d70b00d3d" id=8e0e260c-e602-4209-8aed-585d85efd3fe name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.4582306Z" level=info msg="Error loading conmon cgroup of container 5de7b7bb4ab5b868f88423fe2fc3ba9adcea499999409590c7abb10d70b00d3d: cgroup deleted" id=8e0e260c-e602-4209-8aed-585d85efd3fe name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.464172502Z" level=info msg="Removed container 5de7b7bb4ab5b868f88423fe2fc3ba9adcea499999409590c7abb10d70b00d3d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5/dashboard-metrics-scraper" id=8e0e260c-e602-4209-8aed-585d85efd3fe name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	c5ec8602847c1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago       Exited              dashboard-metrics-scraper   2                   bac9a93531666       dashboard-metrics-scraper-5f989dc9cf-dwlf5       kubernetes-dashboard
	9bf3dd205682e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   370b1d541e031       storage-provisioner                              kube-system
	02db52ed7a4e5       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago       Running             kubernetes-dashboard        0                   3acafafed5924       kubernetes-dashboard-8694d4445c-w7rtb            kubernetes-dashboard
	a4b4dbcba8f37       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   2d2d8aebb176f       busybox                                          default
	1bf03ed5a3dee       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           56 seconds ago       Running             coredns                     1                   a302a10760625       coredns-5dd5756b68-h6b8n                         kube-system
	f8f5c2f8b84b2       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           56 seconds ago       Running             kube-proxy                  1                   4f04c938ea883       kube-proxy-s9rqv                                 kube-system
	b44546f54a873       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   9efcd39fbf36d       kindnet-6l8w5                                    kube-system
	828cd3adcf6b5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   370b1d541e031       storage-provisioner                              kube-system
	34ee70a0be166       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   cccd2d861c9eb       kube-controller-manager-old-k8s-version-706028   kube-system
	98f50d387d5b2       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   b61608640ef5e       kube-apiserver-old-k8s-version-706028            kube-system
	676b2dbee75ee       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   7db8d8e747c63       etcd-old-k8s-version-706028                      kube-system
	ea67be45b14c0       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   320e2bd47fb58       kube-scheduler-old-k8s-version-706028            kube-system
	
	
	==> coredns [1bf03ed5a3dee20793e8e504c18ad29f97cbbd2454a960d77c4e4dfe52e1dde9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56063 - 56118 "HINFO IN 2942545621513710047.1369116234846632473. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031578699s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-706028
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-706028
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=old-k8s-version-706028
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_08_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:08:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-706028
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:10:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:10:13 +0000   Sun, 23 Nov 2025 10:08:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:10:13 +0000   Sun, 23 Nov 2025 10:08:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:10:13 +0000   Sun, 23 Nov 2025 10:08:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:10:13 +0000   Sun, 23 Nov 2025 10:08:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-706028
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                1d4707fe-e85e-433b-aa40-17ce9a4af156
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-5dd5756b68-h6b8n                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m
	  kube-system                 etcd-old-k8s-version-706028                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m12s
	  kube-system                 kindnet-6l8w5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m1s
	  kube-system                 kube-apiserver-old-k8s-version-706028             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-controller-manager-old-k8s-version-706028    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-proxy-s9rqv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-scheduler-old-k8s-version-706028             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-dwlf5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-w7rtb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 119s                   kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m21s (x8 over 2m21s)  kubelet          Node old-k8s-version-706028 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x8 over 2m21s)  kubelet          Node old-k8s-version-706028 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x8 over 2m21s)  kubelet          Node old-k8s-version-706028 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m12s                  kubelet          Node old-k8s-version-706028 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s                  kubelet          Node old-k8s-version-706028 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m12s                  kubelet          Node old-k8s-version-706028 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           2m1s                   node-controller  Node old-k8s-version-706028 event: Registered Node old-k8s-version-706028 in Controller
	  Normal  NodeReady                105s                   kubelet          Node old-k8s-version-706028 status is now: NodeReady
	  Normal  Starting                 65s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node old-k8s-version-706028 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node old-k8s-version-706028 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node old-k8s-version-706028 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                    node-controller  Node old-k8s-version-706028 event: Registered Node old-k8s-version-706028 in Controller
	
	
	==> dmesg <==
	[Nov23 09:46] overlayfs: idmapped layers are currently not supported
	[ +17.278795] overlayfs: idmapped layers are currently not supported
	[Nov23 09:47] overlayfs: idmapped layers are currently not supported
	[ +12.563591] hrtimer: interrupt took 4093727 ns
	[ +14.190024] overlayfs: idmapped layers are currently not supported
	[Nov23 09:49] overlayfs: idmapped layers are currently not supported
	[Nov23 09:50] overlayfs: idmapped layers are currently not supported
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [676b2dbee75eee912c3a604195863ba16974dcbd9b686ff17513a405a42b3e91] <==
	{"level":"info","ts":"2025-11-23T10:09:35.626521Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T10:09:35.626609Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T10:09:35.6265Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T10:09:35.658035Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T10:09:35.65808Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T10:09:35.78476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-23T10:09:35.784817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-23T10:09:35.784835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T10:09:35.784847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-23T10:09:35.784853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-23T10:09:35.784863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-23T10:09:35.784871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-23T10:09:35.80031Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-706028 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T10:09:35.800355Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:09:35.861018Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T10:09:35.890884Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:09:35.892332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-23T10:09:35.961485Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T10:09:35.961537Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T10:09:42.773118Z","caller":"traceutil/trace.go:171","msg":"trace[338250777] linearizableReadLoop","detail":"{readStateIndex:507; appliedIndex:506; }","duration":"125.795602ms","start":"2025-11-23T10:09:42.647306Z","end":"2025-11-23T10:09:42.773101Z","steps":["trace[338250777] 'read index received'  (duration: 125.640457ms)","trace[338250777] 'applied index is now lower than readState.Index'  (duration: 154.612µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T10:09:42.773349Z","caller":"traceutil/trace.go:171","msg":"trace[553637842] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"138.274154ms","start":"2025-11-23T10:09:42.635066Z","end":"2025-11-23T10:09:42.773341Z","steps":["trace[553637842] 'process raft request'  (duration: 137.930205ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T10:09:42.773661Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.027439ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/old-k8s-version-706028.187a9af35da5cb79\" ","response":"range_response_count:1 size:755"}
	{"level":"info","ts":"2025-11-23T10:09:42.773706Z","caller":"traceutil/trace.go:171","msg":"trace[1127541121] range","detail":"{range_begin:/registry/events/default/old-k8s-version-706028.187a9af35da5cb79; range_end:; response_count:1; response_revision:485; }","duration":"107.085566ms","start":"2025-11-23T10:09:42.66661Z","end":"2025-11-23T10:09:42.773695Z","steps":["trace[1127541121] 'agreement among raft nodes before linearized reading'  (duration: 106.984789ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T10:09:42.773845Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.562253ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" ","response":"range_response_count:1 size:840"}
	{"level":"info","ts":"2025-11-23T10:09:42.773864Z","caller":"traceutil/trace.go:171","msg":"trace[948649192] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:1; response_revision:485; }","duration":"126.583005ms","start":"2025-11-23T10:09:42.647276Z","end":"2025-11-23T10:09:42.773859Z","steps":["trace[948649192] 'agreement among raft nodes before linearized reading'  (duration: 126.540666ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:10:40 up  2:53,  0 user,  load average: 4.71, 4.41, 3.37
	Linux old-k8s-version-706028 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b44546f54a873112a74f2a82e7c9a205fd2e9c0e40cacf6ffa55b2b473ef0d36] <==
	I1123 10:09:43.189586       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:09:43.189957       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:09:43.190114       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:09:43.190153       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:09:43.190189       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:09:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:09:43.403699       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:09:43.404162       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:09:43.404218       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:09:43.404374       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 10:10:13.403769       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 10:10:13.406284       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 10:10:13.406397       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 10:10:13.406508       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1123 10:10:15.006896       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:10:15.007030       1 metrics.go:72] Registering metrics
	I1123 10:10:15.007250       1 controller.go:711] "Syncing nftables rules"
	I1123 10:10:23.403393       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:10:23.403457       1 main.go:301] handling current node
	I1123 10:10:33.403294       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:10:33.403414       1 main.go:301] handling current node
	
	
	==> kube-apiserver [98f50d387d5b2fded7f07e260ceb83bce5a609dc2bd07303f78f93578f6d82ed] <==
	I1123 10:09:42.153624       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 10:09:42.155834       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 10:09:42.164694       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 10:09:42.182918       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:09:42.194203       1 aggregator.go:166] initial CRD sync complete...
	I1123 10:09:42.194401       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 10:09:42.194450       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:09:42.194472       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:09:42.297899       1 trace.go:236] Trace[2022374978]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:162c2b0b-14ab-44c0-a5e9-bb2747f2fd3e,client:192.168.76.2,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (23-Nov-2025 10:09:41.625) (total time: 672ms):
	Trace[2022374978]: [672.462357ms] [672.462357ms] END
	I1123 10:09:42.383684       1 trace.go:236] Trace[782550623]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d040da8c-7356-4130-8d69-283d4c115d2f,client:192.168.76.2,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (23-Nov-2025 10:09:41.399) (total time: 984ms):
	Trace[782550623]: ---"Write to database call failed" len:4139,err:nodes "old-k8s-version-706028" already exists 140ms (10:09:42.383)
	Trace[782550623]: [984.591332ms] [984.591332ms] END
	E1123 10:09:42.511951       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 10:09:42.563585       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:09:45.807391       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 10:09:45.874955       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 10:09:45.908452       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:09:45.930079       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:09:45.942808       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 10:09:46.009264       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.114.62"}
	I1123 10:09:46.047323       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.210.11"}
	I1123 10:09:55.960267       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:09:56.211513       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 10:09:56.282836       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [34ee70a0be166e12e57fb579eaa0cb22b8873a626bdc6ae8d83d81bfcbff7280] <==
	I1123 10:09:56.219875       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1123 10:09:56.227728       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1123 10:09:56.357688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="447.143096ms"
	I1123 10:09:56.357806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.175µs"
	I1123 10:09:56.358543       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 10:09:56.366929       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 10:09:56.366958       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 10:09:56.372632       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-w7rtb"
	I1123 10:09:56.398811       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-dwlf5"
	I1123 10:09:56.419346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="199.192004ms"
	I1123 10:09:56.432489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="211.957336ms"
	I1123 10:09:56.437734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.332198ms"
	I1123 10:09:56.437816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="40.657µs"
	I1123 10:09:56.459066       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="79.213µs"
	I1123 10:09:56.466856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="34.310296ms"
	I1123 10:09:56.466932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="34.233µs"
	I1123 10:09:56.475415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="96.018µs"
	I1123 10:10:04.408922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.51682ms"
	I1123 10:10:04.409041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="53.014µs"
	I1123 10:10:08.431790       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="801.097µs"
	I1123 10:10:09.434234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.219µs"
	I1123 10:10:10.415424       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.93µs"
	I1123 10:10:22.832904       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.443428ms"
	I1123 10:10:22.834334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.336µs"
	I1123 10:10:28.477845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="85.417µs"
	
	
	==> kube-proxy [f8f5c2f8b84b2f925f1dac344595832b43b0211b004448a1db7b9c23faf52228] <==
	I1123 10:09:43.973388       1 server_others.go:69] "Using iptables proxy"
	I1123 10:09:44.151281       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1123 10:09:44.436950       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:09:44.443611       1 server_others.go:152] "Using iptables Proxier"
	I1123 10:09:44.443653       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 10:09:44.443661       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 10:09:44.443689       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 10:09:44.443888       1 server.go:846] "Version info" version="v1.28.0"
	I1123 10:09:44.443905       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:09:44.445362       1 config.go:188] "Starting service config controller"
	I1123 10:09:44.445394       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 10:09:44.445437       1 config.go:97] "Starting endpoint slice config controller"
	I1123 10:09:44.445442       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 10:09:44.445878       1 config.go:315] "Starting node config controller"
	I1123 10:09:44.445885       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 10:09:44.545593       1 shared_informer.go:318] Caches are synced for service config
	I1123 10:09:44.546124       1 shared_informer.go:318] Caches are synced for node config
	I1123 10:09:44.546146       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ea67be45b14c0ca0ac41632b23ebd8095b8b2a16235fddfd8d5a4b1519577720] <==
	I1123 10:09:39.617493       1 serving.go:348] Generated self-signed cert in-memory
	I1123 10:09:43.574892       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1123 10:09:43.574990       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:09:43.617214       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1123 10:09:43.617325       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1123 10:09:43.617344       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1123 10:09:43.617364       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1123 10:09:43.618973       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:09:43.619002       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1123 10:09:43.619018       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:09:43.619022       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1123 10:09:43.818605       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1123 10:09:43.820279       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1123 10:09:43.820282       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 10:09:56 old-k8s-version-706028 kubelet[787]: I1123 10:09:56.415919     787 topology_manager.go:215] "Topology Admit Handler" podUID="c818242a-19c9-4be3-995d-fe06e5960ea5" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-dwlf5"
	Nov 23 10:09:56 old-k8s-version-706028 kubelet[787]: I1123 10:09:56.599573     787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c818242a-19c9-4be3-995d-fe06e5960ea5-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-dwlf5\" (UID: \"c818242a-19c9-4be3-995d-fe06e5960ea5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5"
	Nov 23 10:09:56 old-k8s-version-706028 kubelet[787]: I1123 10:09:56.599651     787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f7af7097-20f5-4919-86c3-74411c41cfb0-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-w7rtb\" (UID: \"f7af7097-20f5-4919-86c3-74411c41cfb0\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-w7rtb"
	Nov 23 10:09:56 old-k8s-version-706028 kubelet[787]: I1123 10:09:56.599682     787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knps7\" (UniqueName: \"kubernetes.io/projected/c818242a-19c9-4be3-995d-fe06e5960ea5-kube-api-access-knps7\") pod \"dashboard-metrics-scraper-5f989dc9cf-dwlf5\" (UID: \"c818242a-19c9-4be3-995d-fe06e5960ea5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5"
	Nov 23 10:09:56 old-k8s-version-706028 kubelet[787]: I1123 10:09:56.599710     787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnlnn\" (UniqueName: \"kubernetes.io/projected/f7af7097-20f5-4919-86c3-74411c41cfb0-kube-api-access-tnlnn\") pod \"kubernetes-dashboard-8694d4445c-w7rtb\" (UID: \"f7af7097-20f5-4919-86c3-74411c41cfb0\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-w7rtb"
	Nov 23 10:09:57 old-k8s-version-706028 kubelet[787]: W1123 10:09:57.046807     787 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/crio-3acafafed59245a96b1a669d20e3101629f020f43de2ab26f92952ca03218e4c WatchSource:0}: Error finding container 3acafafed59245a96b1a669d20e3101629f020f43de2ab26f92952ca03218e4c: Status 404 returned error can't find the container with id 3acafafed59245a96b1a669d20e3101629f020f43de2ab26f92952ca03218e4c
	Nov 23 10:09:57 old-k8s-version-706028 kubelet[787]: W1123 10:09:57.075767     787 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/crio-bac9a935316662bcb0069d1475868d37801a5fdb3080582ee6ef356602e0cb73 WatchSource:0}: Error finding container bac9a935316662bcb0069d1475868d37801a5fdb3080582ee6ef356602e0cb73: Status 404 returned error can't find the container with id bac9a935316662bcb0069d1475868d37801a5fdb3080582ee6ef356602e0cb73
	Nov 23 10:10:08 old-k8s-version-706028 kubelet[787]: I1123 10:10:08.390685     787 scope.go:117] "RemoveContainer" containerID="7dcc8d924b9cd2fb918b8b13ea22be4b9d134f5d00ef36c2f66ad76d0e0830b4"
	Nov 23 10:10:08 old-k8s-version-706028 kubelet[787]: I1123 10:10:08.428261     787 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-w7rtb" podStartSLOduration=6.082243225 podCreationTimestamp="2025-11-23 10:09:56 +0000 UTC" firstStartedPulling="2025-11-23 10:09:57.051753636 +0000 UTC m=+23.235008862" lastFinishedPulling="2025-11-23 10:10:03.394462593 +0000 UTC m=+29.577717811" observedRunningTime="2025-11-23 10:10:04.402388311 +0000 UTC m=+30.585643529" watchObservedRunningTime="2025-11-23 10:10:08.424952174 +0000 UTC m=+34.608207400"
	Nov 23 10:10:09 old-k8s-version-706028 kubelet[787]: I1123 10:10:09.395698     787 scope.go:117] "RemoveContainer" containerID="5de7b7bb4ab5b868f88423fe2fc3ba9adcea499999409590c7abb10d70b00d3d"
	Nov 23 10:10:09 old-k8s-version-706028 kubelet[787]: E1123 10:10:09.396077     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dwlf5_kubernetes-dashboard(c818242a-19c9-4be3-995d-fe06e5960ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5" podUID="c818242a-19c9-4be3-995d-fe06e5960ea5"
	Nov 23 10:10:09 old-k8s-version-706028 kubelet[787]: I1123 10:10:09.396592     787 scope.go:117] "RemoveContainer" containerID="7dcc8d924b9cd2fb918b8b13ea22be4b9d134f5d00ef36c2f66ad76d0e0830b4"
	Nov 23 10:10:10 old-k8s-version-706028 kubelet[787]: I1123 10:10:10.399615     787 scope.go:117] "RemoveContainer" containerID="5de7b7bb4ab5b868f88423fe2fc3ba9adcea499999409590c7abb10d70b00d3d"
	Nov 23 10:10:10 old-k8s-version-706028 kubelet[787]: E1123 10:10:10.399889     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dwlf5_kubernetes-dashboard(c818242a-19c9-4be3-995d-fe06e5960ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5" podUID="c818242a-19c9-4be3-995d-fe06e5960ea5"
	Nov 23 10:10:14 old-k8s-version-706028 kubelet[787]: I1123 10:10:14.410038     787 scope.go:117] "RemoveContainer" containerID="828cd3adcf6b5681aa7d384f69cb7566664e59a1ab84ee837327f44e3e645dfc"
	Nov 23 10:10:17 old-k8s-version-706028 kubelet[787]: I1123 10:10:17.018199     787 scope.go:117] "RemoveContainer" containerID="5de7b7bb4ab5b868f88423fe2fc3ba9adcea499999409590c7abb10d70b00d3d"
	Nov 23 10:10:17 old-k8s-version-706028 kubelet[787]: E1123 10:10:17.018566     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dwlf5_kubernetes-dashboard(c818242a-19c9-4be3-995d-fe06e5960ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5" podUID="c818242a-19c9-4be3-995d-fe06e5960ea5"
	Nov 23 10:10:28 old-k8s-version-706028 kubelet[787]: I1123 10:10:28.155098     787 scope.go:117] "RemoveContainer" containerID="5de7b7bb4ab5b868f88423fe2fc3ba9adcea499999409590c7abb10d70b00d3d"
	Nov 23 10:10:28 old-k8s-version-706028 kubelet[787]: I1123 10:10:28.448765     787 scope.go:117] "RemoveContainer" containerID="5de7b7bb4ab5b868f88423fe2fc3ba9adcea499999409590c7abb10d70b00d3d"
	Nov 23 10:10:28 old-k8s-version-706028 kubelet[787]: I1123 10:10:28.449049     787 scope.go:117] "RemoveContainer" containerID="c5ec8602847c185ff0bd5b175bcda823368a9380dc871c5be2c3ac84fd5e4292"
	Nov 23 10:10:28 old-k8s-version-706028 kubelet[787]: E1123 10:10:28.449347     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dwlf5_kubernetes-dashboard(c818242a-19c9-4be3-995d-fe06e5960ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5" podUID="c818242a-19c9-4be3-995d-fe06e5960ea5"
	Nov 23 10:10:36 old-k8s-version-706028 kubelet[787]: I1123 10:10:36.958515     787 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 23 10:10:36 old-k8s-version-706028 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:10:37 old-k8s-version-706028 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:10:37 old-k8s-version-706028 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [02db52ed7a4e551150e8645311a4cfd60769ef5552d108ac02a63489a373aba2] <==
	2025/11/23 10:10:03 Using namespace: kubernetes-dashboard
	2025/11/23 10:10:03 Using in-cluster config to connect to apiserver
	2025/11/23 10:10:03 Using secret token for csrf signing
	2025/11/23 10:10:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 10:10:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 10:10:03 Successful initial request to the apiserver, version: v1.28.0
	2025/11/23 10:10:03 Generating JWE encryption key
	2025/11/23 10:10:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 10:10:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 10:10:03 Initializing JWE encryption key from synchronized object
	2025/11/23 10:10:03 Creating in-cluster Sidecar client
	2025/11/23 10:10:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:10:03 Serving insecurely on HTTP port: 9090
	2025/11/23 10:10:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:10:03 Starting overwatch
	
	
	==> storage-provisioner [828cd3adcf6b5681aa7d384f69cb7566664e59a1ab84ee837327f44e3e645dfc] <==
	I1123 10:09:43.660151       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 10:10:13.725774       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9bf3dd205682ea3296e952ceb1dadbbe4532b2c1e06757abe529e2af9a50d562] <==
	I1123 10:10:14.466406       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:10:14.486435       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:10:14.487228       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 10:10:31.885142       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:10:31.885430       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-706028_b91ca60c-78f2-4d67-9f45-a34c10d662b4!
	I1123 10:10:31.886107       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6338f536-3941-4183-9bc9-75c073ed286e", APIVersion:"v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-706028_b91ca60c-78f2-4d67-9f45-a34c10d662b4 became leader
	I1123 10:10:31.986143       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-706028_b91ca60c-78f2-4d67-9f45-a34c10d662b4!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-706028 -n old-k8s-version-706028
E1123 10:10:40.614418  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/enable-default-cni-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:10:40.622311  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/enable-default-cni-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:10:40.633671  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/enable-default-cni-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:10:40.656308  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/enable-default-cni-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:10:40.697798  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/enable-default-cni-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:10:40.779146  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/enable-default-cni-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:10:40.940666  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/enable-default-cni-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-706028 -n old-k8s-version-706028: exit status 2 (465.532698ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-706028 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-706028
helpers_test.go:243: (dbg) docker inspect old-k8s-version-706028:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5",
	        "Created": "2025-11-23T10:08:00.027667236Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 510168,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:09:26.542902134Z",
	            "FinishedAt": "2025-11-23T10:09:24.400275811Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/hosts",
	        "LogPath": "/var/lib/docker/containers/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5-json.log",
	        "Name": "/old-k8s-version-706028",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-706028:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-706028",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5",
	                "LowerDir": "/var/lib/docker/overlay2/4fc786c1031046370668829710493e9535cd397f4cc7ed5d9f51a091e2219a9e-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4fc786c1031046370668829710493e9535cd397f4cc7ed5d9f51a091e2219a9e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4fc786c1031046370668829710493e9535cd397f4cc7ed5d9f51a091e2219a9e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4fc786c1031046370668829710493e9535cd397f4cc7ed5d9f51a091e2219a9e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-706028",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-706028/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-706028",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-706028",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-706028",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "baaa0cc534cbbf7405ae4da7621b549237687412c070314472d691f1a5b76d6e",
	            "SandboxKey": "/var/run/docker/netns/baaa0cc534cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-706028": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:9b:30:a3:33:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "38827229c06574d77dd6a72b1084a1de5267d818d9a4bc2e2e69c7834d9baf50",
	                    "EndpointID": "cb45cd812b1e7d00ec7c8bfe3737b03e040183cd7549ebc5db08a2f7512eec58",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-706028",
	                        "ec71fb4cb0c2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706028 -n old-k8s-version-706028
E1123 10:10:41.262248  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/enable-default-cni-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706028 -n old-k8s-version-706028: exit status 2 (341.176693ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-706028 logs -n 25
E1123 10:10:41.833997  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:10:41.904454  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/enable-default-cni-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-706028 logs -n 25: (1.377047004s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p calico-507563 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cri-dockerd --version                                                                                                                                                                                                   │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │                     │
	│ ssh     │ -p calico-507563 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo containerd config dump                                                                                                                                                                                                  │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo crio config                                                                                                                                                                                                             │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ delete  │ -p calico-507563                                                                                                                                                                                                                              │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-706028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │                     │
	│ stop    │ -p old-k8s-version-706028 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-706028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p old-k8s-version-706028 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-020224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ stop    │ -p no-preload-020224 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ image   │ old-k8s-version-706028 image list --format=json                                                                                                                                                                                               │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ pause   │ -p old-k8s-version-706028 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-020224 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:10:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:10:39.586486  514436 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:10:39.586637  514436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:39.586648  514436 out.go:374] Setting ErrFile to fd 2...
	I1123 10:10:39.586654  514436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:39.586921  514436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:10:39.588433  514436 out.go:368] Setting JSON to false
	I1123 10:10:39.589483  514436 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10389,"bootTime":1763882251,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:10:39.589549  514436 start.go:143] virtualization:  
	I1123 10:10:39.594997  514436 out.go:179] * [no-preload-020224] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:10:39.598293  514436 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 10:10:39.598357  514436 notify.go:221] Checking for updates...
	I1123 10:10:39.604183  514436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:10:39.607212  514436 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:10:39.610141  514436 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 10:10:39.613060  514436 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:10:39.616014  514436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:10:39.619354  514436 config.go:182] Loaded profile config "no-preload-020224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:39.619966  514436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:10:39.654751  514436 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:10:39.654882  514436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:10:39.752649  514436 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:10:39.742054165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:10:39.752754  514436 docker.go:319] overlay module found
	I1123 10:10:39.755906  514436 out.go:179] * Using the docker driver based on existing profile
	I1123 10:10:39.758679  514436 start.go:309] selected driver: docker
	I1123 10:10:39.758700  514436 start.go:927] validating driver "docker" against &{Name:no-preload-020224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020224 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:10:39.758803  514436 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:10:39.759519  514436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:10:39.848906  514436 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:10:39.83821348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:10:39.849241  514436 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:10:39.849276  514436 cni.go:84] Creating CNI manager for ""
	I1123 10:10:39.849336  514436 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:10:39.849380  514436 start.go:353] cluster config:
	{Name:no-preload-020224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:10:39.852625  514436 out.go:179] * Starting "no-preload-020224" primary control-plane node in "no-preload-020224" cluster
	I1123 10:10:39.855414  514436 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:10:39.858384  514436 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:10:39.861498  514436 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:10:39.861664  514436 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/config.json ...
	I1123 10:10:39.862009  514436 cache.go:107] acquiring lock: {Name:mk85a7ea341b7b22f7144b443067338b93f1733a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:10:39.862092  514436 cache.go:115] /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 10:10:39.862106  514436 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.7µs
	I1123 10:10:39.862124  514436 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 10:10:39.862136  514436 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:10:39.862303  514436 cache.go:107] acquiring lock: {Name:mkaa5c4da3e01760d2e809ef3deba3927b072661 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:10:39.862355  514436 cache.go:115] /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 10:10:39.862366  514436 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 68.046µs
	I1123 10:10:39.862379  514436 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 10:10:39.862390  514436 cache.go:107] acquiring lock: {Name:mk6dbb06f379574109993e0f18706986a896189d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:10:39.862423  514436 cache.go:115] /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 10:10:39.862433  514436 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 43.996µs
	I1123 10:10:39.862439  514436 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 10:10:39.862448  514436 cache.go:107] acquiring lock: {Name:mkf85ca10e1c40480156040157763a03d84ef922 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:10:39.862477  514436 cache.go:115] /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 10:10:39.862487  514436 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 40.452µs
	I1123 10:10:39.862493  514436 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 10:10:39.862502  514436 cache.go:107] acquiring lock: {Name:mka916dc9fc4585e18fed462a4e6c4c2236e466b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:10:39.862529  514436 cache.go:115] /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 10:10:39.862537  514436 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 35.43µs
	I1123 10:10:39.862543  514436 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 10:10:39.862552  514436 cache.go:107] acquiring lock: {Name:mk4b36753df55ff24d49ddb99313394a283546fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:10:39.862582  514436 cache.go:115] /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 10:10:39.862593  514436 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 40.805µs
	I1123 10:10:39.862599  514436 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 10:10:39.862608  514436 cache.go:107] acquiring lock: {Name:mk0a81679e590fdd4a9198b9f7bcc6fd7b402dd1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:10:39.862642  514436 cache.go:115] /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1123 10:10:39.862651  514436 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 44.653µs
	I1123 10:10:39.862657  514436 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 10:10:39.862680  514436 cache.go:107] acquiring lock: {Name:mk5e8535a6036e26b37940c711fe2645a974c77b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:10:39.862712  514436 cache.go:115] /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 10:10:39.862720  514436 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 49.724µs
	I1123 10:10:39.862743  514436 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 10:10:39.862750  514436 cache.go:87] Successfully saved all images to host disk.
	I1123 10:10:39.884831  514436 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:10:39.884855  514436 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:10:39.884870  514436 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:10:39.884900  514436 start.go:360] acquireMachinesLock for no-preload-020224: {Name:mk7ef0b074cfea77847aa1186cdbc84a0a684281 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:10:39.884956  514436 start.go:364] duration metric: took 35.98µs to acquireMachinesLock for "no-preload-020224"
	I1123 10:10:39.884980  514436 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:10:39.884985  514436 fix.go:54] fixHost starting: 
	I1123 10:10:39.885454  514436 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:10:39.906368  514436 fix.go:112] recreateIfNeeded on no-preload-020224: state=Stopped err=<nil>
	W1123 10:10:39.906425  514436 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.409822308Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.413521735Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.413557945Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.413586894Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.416820073Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.416855101Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.416877871Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.42019545Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.420232431Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.420260911Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.423849953Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:10:23 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:23.423887501Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.156132987Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=82406df9-d319-4914-a8c6-4c70407cc01d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.157026795Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f63bef76-e816-44bd-a067-0bdde95e8a07 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.158661474Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5/dashboard-metrics-scraper" id=a4909f85-993d-44f7-b210-d2dededfe71e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.158774124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.16752103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.16823022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.197946863Z" level=info msg="Created container c5ec8602847c185ff0bd5b175bcda823368a9380dc871c5be2c3ac84fd5e4292: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5/dashboard-metrics-scraper" id=a4909f85-993d-44f7-b210-d2dededfe71e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.199035727Z" level=info msg="Starting container: c5ec8602847c185ff0bd5b175bcda823368a9380dc871c5be2c3ac84fd5e4292" id=1a49505e-ad39-4e92-ae0f-6329f916bc41 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.200779174Z" level=info msg="Started container" PID=1715 containerID=c5ec8602847c185ff0bd5b175bcda823368a9380dc871c5be2c3ac84fd5e4292 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5/dashboard-metrics-scraper id=1a49505e-ad39-4e92-ae0f-6329f916bc41 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bac9a935316662bcb0069d1475868d37801a5fdb3080582ee6ef356602e0cb73
	Nov 23 10:10:28 old-k8s-version-706028 conmon[1712]: conmon c5ec8602847c185ff0bd <ninfo>: container 1715 exited with status 1
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.451250837Z" level=info msg="Removing container: 5de7b7bb4ab5b868f88423fe2fc3ba9adcea499999409590c7abb10d70b00d3d" id=8e0e260c-e602-4209-8aed-585d85efd3fe name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.4582306Z" level=info msg="Error loading conmon cgroup of container 5de7b7bb4ab5b868f88423fe2fc3ba9adcea499999409590c7abb10d70b00d3d: cgroup deleted" id=8e0e260c-e602-4209-8aed-585d85efd3fe name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:10:28 old-k8s-version-706028 crio[657]: time="2025-11-23T10:10:28.464172502Z" level=info msg="Removed container 5de7b7bb4ab5b868f88423fe2fc3ba9adcea499999409590c7abb10d70b00d3d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5/dashboard-metrics-scraper" id=8e0e260c-e602-4209-8aed-585d85efd3fe name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	c5ec8602847c1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago       Exited              dashboard-metrics-scraper   2                   bac9a93531666       dashboard-metrics-scraper-5f989dc9cf-dwlf5       kubernetes-dashboard
	9bf3dd205682e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   370b1d541e031       storage-provisioner                              kube-system
	02db52ed7a4e5       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago       Running             kubernetes-dashboard        0                   3acafafed5924       kubernetes-dashboard-8694d4445c-w7rtb            kubernetes-dashboard
	a4b4dbcba8f37       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   2d2d8aebb176f       busybox                                          default
	1bf03ed5a3dee       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           59 seconds ago       Running             coredns                     1                   a302a10760625       coredns-5dd5756b68-h6b8n                         kube-system
	f8f5c2f8b84b2       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           59 seconds ago       Running             kube-proxy                  1                   4f04c938ea883       kube-proxy-s9rqv                                 kube-system
	b44546f54a873       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   9efcd39fbf36d       kindnet-6l8w5                                    kube-system
	828cd3adcf6b5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   370b1d541e031       storage-provisioner                              kube-system
	34ee70a0be166       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   cccd2d861c9eb       kube-controller-manager-old-k8s-version-706028   kube-system
	98f50d387d5b2       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   b61608640ef5e       kube-apiserver-old-k8s-version-706028            kube-system
	676b2dbee75ee       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   7db8d8e747c63       etcd-old-k8s-version-706028                      kube-system
	ea67be45b14c0       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   320e2bd47fb58       kube-scheduler-old-k8s-version-706028            kube-system
	
	
	==> coredns [1bf03ed5a3dee20793e8e504c18ad29f97cbbd2454a960d77c4e4dfe52e1dde9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56063 - 56118 "HINFO IN 2942545621513710047.1369116234846632473. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031578699s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-706028
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-706028
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=old-k8s-version-706028
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_08_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:08:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-706028
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:10:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:10:13 +0000   Sun, 23 Nov 2025 10:08:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:10:13 +0000   Sun, 23 Nov 2025 10:08:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:10:13 +0000   Sun, 23 Nov 2025 10:08:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:10:13 +0000   Sun, 23 Nov 2025 10:08:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-706028
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                1d4707fe-e85e-433b-aa40-17ce9a4af156
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 coredns-5dd5756b68-h6b8n                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m3s
	  kube-system                 etcd-old-k8s-version-706028                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m15s
	  kube-system                 kindnet-6l8w5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m4s
	  kube-system                 kube-apiserver-old-k8s-version-706028             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-controller-manager-old-k8s-version-706028    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-proxy-s9rqv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-scheduler-old-k8s-version-706028             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-dwlf5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-w7rtb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m2s                   kube-proxy       
	  Normal  Starting                 58s                    kube-proxy       
	  Normal  Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m24s (x8 over 2m24s)  kubelet          Node old-k8s-version-706028 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m24s (x8 over 2m24s)  kubelet          Node old-k8s-version-706028 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m24s (x8 over 2m24s)  kubelet          Node old-k8s-version-706028 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m15s                  kubelet          Node old-k8s-version-706028 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m15s                  kubelet          Node old-k8s-version-706028 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m15s                  kubelet          Node old-k8s-version-706028 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           2m4s                   node-controller  Node old-k8s-version-706028 event: Registered Node old-k8s-version-706028 in Controller
	  Normal  NodeReady                108s                   kubelet          Node old-k8s-version-706028 status is now: NodeReady
	  Normal  Starting                 68s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s (x8 over 68s)      kubelet          Node old-k8s-version-706028 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x8 over 68s)      kubelet          Node old-k8s-version-706028 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x8 over 68s)      kubelet          Node old-k8s-version-706028 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                    node-controller  Node old-k8s-version-706028 event: Registered Node old-k8s-version-706028 in Controller
	
	
	==> dmesg <==
	[Nov23 09:46] overlayfs: idmapped layers are currently not supported
	[ +17.278795] overlayfs: idmapped layers are currently not supported
	[Nov23 09:47] overlayfs: idmapped layers are currently not supported
	[ +12.563591] hrtimer: interrupt took 4093727 ns
	[ +14.190024] overlayfs: idmapped layers are currently not supported
	[Nov23 09:49] overlayfs: idmapped layers are currently not supported
	[Nov23 09:50] overlayfs: idmapped layers are currently not supported
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [676b2dbee75eee912c3a604195863ba16974dcbd9b686ff17513a405a42b3e91] <==
	{"level":"info","ts":"2025-11-23T10:09:35.626521Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T10:09:35.626609Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T10:09:35.6265Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T10:09:35.658035Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T10:09:35.65808Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T10:09:35.78476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-23T10:09:35.784817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-23T10:09:35.784835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T10:09:35.784847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-23T10:09:35.784853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-23T10:09:35.784863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-23T10:09:35.784871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-23T10:09:35.80031Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-706028 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T10:09:35.800355Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:09:35.861018Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T10:09:35.890884Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:09:35.892332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-23T10:09:35.961485Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T10:09:35.961537Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T10:09:42.773118Z","caller":"traceutil/trace.go:171","msg":"trace[338250777] linearizableReadLoop","detail":"{readStateIndex:507; appliedIndex:506; }","duration":"125.795602ms","start":"2025-11-23T10:09:42.647306Z","end":"2025-11-23T10:09:42.773101Z","steps":["trace[338250777] 'read index received'  (duration: 125.640457ms)","trace[338250777] 'applied index is now lower than readState.Index'  (duration: 154.612µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T10:09:42.773349Z","caller":"traceutil/trace.go:171","msg":"trace[553637842] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"138.274154ms","start":"2025-11-23T10:09:42.635066Z","end":"2025-11-23T10:09:42.773341Z","steps":["trace[553637842] 'process raft request'  (duration: 137.930205ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T10:09:42.773661Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.027439ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/old-k8s-version-706028.187a9af35da5cb79\" ","response":"range_response_count:1 size:755"}
	{"level":"info","ts":"2025-11-23T10:09:42.773706Z","caller":"traceutil/trace.go:171","msg":"trace[1127541121] range","detail":"{range_begin:/registry/events/default/old-k8s-version-706028.187a9af35da5cb79; range_end:; response_count:1; response_revision:485; }","duration":"107.085566ms","start":"2025-11-23T10:09:42.66661Z","end":"2025-11-23T10:09:42.773695Z","steps":["trace[1127541121] 'agreement among raft nodes before linearized reading'  (duration: 106.984789ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T10:09:42.773845Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.562253ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" ","response":"range_response_count:1 size:840"}
	{"level":"info","ts":"2025-11-23T10:09:42.773864Z","caller":"traceutil/trace.go:171","msg":"trace[948649192] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:1; response_revision:485; }","duration":"126.583005ms","start":"2025-11-23T10:09:42.647276Z","end":"2025-11-23T10:09:42.773859Z","steps":["trace[948649192] 'agreement among raft nodes before linearized reading'  (duration: 126.540666ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:10:42 up  2:53,  0 user,  load average: 4.71, 4.41, 3.37
	Linux old-k8s-version-706028 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b44546f54a873112a74f2a82e7c9a205fd2e9c0e40cacf6ffa55b2b473ef0d36] <==
	I1123 10:09:43.189586       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:09:43.189957       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:09:43.190114       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:09:43.190153       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:09:43.190189       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:09:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:09:43.403699       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:09:43.404162       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:09:43.404218       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:09:43.404374       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 10:10:13.403769       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 10:10:13.406284       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 10:10:13.406397       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 10:10:13.406508       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1123 10:10:15.006896       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:10:15.007030       1 metrics.go:72] Registering metrics
	I1123 10:10:15.007250       1 controller.go:711] "Syncing nftables rules"
	I1123 10:10:23.403393       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:10:23.403457       1 main.go:301] handling current node
	I1123 10:10:33.403294       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:10:33.403414       1 main.go:301] handling current node
	
	
	==> kube-apiserver [98f50d387d5b2fded7f07e260ceb83bce5a609dc2bd07303f78f93578f6d82ed] <==
	I1123 10:09:42.153624       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 10:09:42.155834       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 10:09:42.164694       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 10:09:42.182918       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:09:42.194203       1 aggregator.go:166] initial CRD sync complete...
	I1123 10:09:42.194401       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 10:09:42.194450       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:09:42.194472       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:09:42.297899       1 trace.go:236] Trace[2022374978]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:162c2b0b-14ab-44c0-a5e9-bb2747f2fd3e,client:192.168.76.2,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (23-Nov-2025 10:09:41.625) (total time: 672ms):
	Trace[2022374978]: [672.462357ms] [672.462357ms] END
	I1123 10:09:42.383684       1 trace.go:236] Trace[782550623]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d040da8c-7356-4130-8d69-283d4c115d2f,client:192.168.76.2,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes,user-agent:kubelet/v1.28.0 (linux/arm64) kubernetes/855e7c4,verb:POST (23-Nov-2025 10:09:41.399) (total time: 984ms):
	Trace[782550623]: ---"Write to database call failed" len:4139,err:nodes "old-k8s-version-706028" already exists 140ms (10:09:42.383)
	Trace[782550623]: [984.591332ms] [984.591332ms] END
	E1123 10:09:42.511951       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 10:09:42.563585       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:09:45.807391       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 10:09:45.874955       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 10:09:45.908452       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:09:45.930079       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:09:45.942808       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 10:09:46.009264       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.114.62"}
	I1123 10:09:46.047323       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.210.11"}
	I1123 10:09:55.960267       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:09:56.211513       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 10:09:56.282836       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [34ee70a0be166e12e57fb579eaa0cb22b8873a626bdc6ae8d83d81bfcbff7280] <==
	I1123 10:09:56.219875       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1123 10:09:56.227728       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1123 10:09:56.357688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="447.143096ms"
	I1123 10:09:56.357806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.175µs"
	I1123 10:09:56.358543       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 10:09:56.366929       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 10:09:56.366958       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 10:09:56.372632       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-w7rtb"
	I1123 10:09:56.398811       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-dwlf5"
	I1123 10:09:56.419346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="199.192004ms"
	I1123 10:09:56.432489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="211.957336ms"
	I1123 10:09:56.437734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.332198ms"
	I1123 10:09:56.437816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="40.657µs"
	I1123 10:09:56.459066       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="79.213µs"
	I1123 10:09:56.466856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="34.310296ms"
	I1123 10:09:56.466932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="34.233µs"
	I1123 10:09:56.475415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="96.018µs"
	I1123 10:10:04.408922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.51682ms"
	I1123 10:10:04.409041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="53.014µs"
	I1123 10:10:08.431790       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="801.097µs"
	I1123 10:10:09.434234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.219µs"
	I1123 10:10:10.415424       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.93µs"
	I1123 10:10:22.832904       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.443428ms"
	I1123 10:10:22.834334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.336µs"
	I1123 10:10:28.477845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="85.417µs"
	
	
	==> kube-proxy [f8f5c2f8b84b2f925f1dac344595832b43b0211b004448a1db7b9c23faf52228] <==
	I1123 10:09:43.973388       1 server_others.go:69] "Using iptables proxy"
	I1123 10:09:44.151281       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1123 10:09:44.436950       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:09:44.443611       1 server_others.go:152] "Using iptables Proxier"
	I1123 10:09:44.443653       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 10:09:44.443661       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 10:09:44.443689       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 10:09:44.443888       1 server.go:846] "Version info" version="v1.28.0"
	I1123 10:09:44.443905       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:09:44.445362       1 config.go:188] "Starting service config controller"
	I1123 10:09:44.445394       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 10:09:44.445437       1 config.go:97] "Starting endpoint slice config controller"
	I1123 10:09:44.445442       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 10:09:44.445878       1 config.go:315] "Starting node config controller"
	I1123 10:09:44.445885       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 10:09:44.545593       1 shared_informer.go:318] Caches are synced for service config
	I1123 10:09:44.546124       1 shared_informer.go:318] Caches are synced for node config
	I1123 10:09:44.546146       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ea67be45b14c0ca0ac41632b23ebd8095b8b2a16235fddfd8d5a4b1519577720] <==
	I1123 10:09:39.617493       1 serving.go:348] Generated self-signed cert in-memory
	I1123 10:09:43.574892       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1123 10:09:43.574990       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:09:43.617214       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1123 10:09:43.617325       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1123 10:09:43.617344       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1123 10:09:43.617364       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1123 10:09:43.618973       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:09:43.619002       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1123 10:09:43.619018       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:09:43.619022       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1123 10:09:43.818605       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1123 10:09:43.820279       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1123 10:09:43.820282       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 10:09:56 old-k8s-version-706028 kubelet[787]: I1123 10:09:56.415919     787 topology_manager.go:215] "Topology Admit Handler" podUID="c818242a-19c9-4be3-995d-fe06e5960ea5" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-dwlf5"
	Nov 23 10:09:56 old-k8s-version-706028 kubelet[787]: I1123 10:09:56.599573     787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c818242a-19c9-4be3-995d-fe06e5960ea5-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-dwlf5\" (UID: \"c818242a-19c9-4be3-995d-fe06e5960ea5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5"
	Nov 23 10:09:56 old-k8s-version-706028 kubelet[787]: I1123 10:09:56.599651     787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f7af7097-20f5-4919-86c3-74411c41cfb0-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-w7rtb\" (UID: \"f7af7097-20f5-4919-86c3-74411c41cfb0\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-w7rtb"
	Nov 23 10:09:56 old-k8s-version-706028 kubelet[787]: I1123 10:09:56.599682     787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knps7\" (UniqueName: \"kubernetes.io/projected/c818242a-19c9-4be3-995d-fe06e5960ea5-kube-api-access-knps7\") pod \"dashboard-metrics-scraper-5f989dc9cf-dwlf5\" (UID: \"c818242a-19c9-4be3-995d-fe06e5960ea5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5"
	Nov 23 10:09:56 old-k8s-version-706028 kubelet[787]: I1123 10:09:56.599710     787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnlnn\" (UniqueName: \"kubernetes.io/projected/f7af7097-20f5-4919-86c3-74411c41cfb0-kube-api-access-tnlnn\") pod \"kubernetes-dashboard-8694d4445c-w7rtb\" (UID: \"f7af7097-20f5-4919-86c3-74411c41cfb0\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-w7rtb"
	Nov 23 10:09:57 old-k8s-version-706028 kubelet[787]: W1123 10:09:57.046807     787 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/crio-3acafafed59245a96b1a669d20e3101629f020f43de2ab26f92952ca03218e4c WatchSource:0}: Error finding container 3acafafed59245a96b1a669d20e3101629f020f43de2ab26f92952ca03218e4c: Status 404 returned error can't find the container with id 3acafafed59245a96b1a669d20e3101629f020f43de2ab26f92952ca03218e4c
	Nov 23 10:09:57 old-k8s-version-706028 kubelet[787]: W1123 10:09:57.075767     787 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ec71fb4cb0c2b6caf67f73db9c668df0e978a615cb8dcaff3b8114cb66fa45b5/crio-bac9a935316662bcb0069d1475868d37801a5fdb3080582ee6ef356602e0cb73 WatchSource:0}: Error finding container bac9a935316662bcb0069d1475868d37801a5fdb3080582ee6ef356602e0cb73: Status 404 returned error can't find the container with id bac9a935316662bcb0069d1475868d37801a5fdb3080582ee6ef356602e0cb73
	Nov 23 10:10:08 old-k8s-version-706028 kubelet[787]: I1123 10:10:08.390685     787 scope.go:117] "RemoveContainer" containerID="7dcc8d924b9cd2fb918b8b13ea22be4b9d134f5d00ef36c2f66ad76d0e0830b4"
	Nov 23 10:10:08 old-k8s-version-706028 kubelet[787]: I1123 10:10:08.428261     787 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-w7rtb" podStartSLOduration=6.082243225 podCreationTimestamp="2025-11-23 10:09:56 +0000 UTC" firstStartedPulling="2025-11-23 10:09:57.051753636 +0000 UTC m=+23.235008862" lastFinishedPulling="2025-11-23 10:10:03.394462593 +0000 UTC m=+29.577717811" observedRunningTime="2025-11-23 10:10:04.402388311 +0000 UTC m=+30.585643529" watchObservedRunningTime="2025-11-23 10:10:08.424952174 +0000 UTC m=+34.608207400"
	Nov 23 10:10:09 old-k8s-version-706028 kubelet[787]: I1123 10:10:09.395698     787 scope.go:117] "RemoveContainer" containerID="5de7b7bb4ab5b868f88423fe2fc3ba9adcea499999409590c7abb10d70b00d3d"
	Nov 23 10:10:09 old-k8s-version-706028 kubelet[787]: E1123 10:10:09.396077     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dwlf5_kubernetes-dashboard(c818242a-19c9-4be3-995d-fe06e5960ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5" podUID="c818242a-19c9-4be3-995d-fe06e5960ea5"
	Nov 23 10:10:09 old-k8s-version-706028 kubelet[787]: I1123 10:10:09.396592     787 scope.go:117] "RemoveContainer" containerID="7dcc8d924b9cd2fb918b8b13ea22be4b9d134f5d00ef36c2f66ad76d0e0830b4"
	Nov 23 10:10:10 old-k8s-version-706028 kubelet[787]: I1123 10:10:10.399615     787 scope.go:117] "RemoveContainer" containerID="5de7b7bb4ab5b868f88423fe2fc3ba9adcea499999409590c7abb10d70b00d3d"
	Nov 23 10:10:10 old-k8s-version-706028 kubelet[787]: E1123 10:10:10.399889     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dwlf5_kubernetes-dashboard(c818242a-19c9-4be3-995d-fe06e5960ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5" podUID="c818242a-19c9-4be3-995d-fe06e5960ea5"
	Nov 23 10:10:14 old-k8s-version-706028 kubelet[787]: I1123 10:10:14.410038     787 scope.go:117] "RemoveContainer" containerID="828cd3adcf6b5681aa7d384f69cb7566664e59a1ab84ee837327f44e3e645dfc"
	Nov 23 10:10:17 old-k8s-version-706028 kubelet[787]: I1123 10:10:17.018199     787 scope.go:117] "RemoveContainer" containerID="5de7b7bb4ab5b868f88423fe2fc3ba9adcea499999409590c7abb10d70b00d3d"
	Nov 23 10:10:17 old-k8s-version-706028 kubelet[787]: E1123 10:10:17.018566     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dwlf5_kubernetes-dashboard(c818242a-19c9-4be3-995d-fe06e5960ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5" podUID="c818242a-19c9-4be3-995d-fe06e5960ea5"
	Nov 23 10:10:28 old-k8s-version-706028 kubelet[787]: I1123 10:10:28.155098     787 scope.go:117] "RemoveContainer" containerID="5de7b7bb4ab5b868f88423fe2fc3ba9adcea499999409590c7abb10d70b00d3d"
	Nov 23 10:10:28 old-k8s-version-706028 kubelet[787]: I1123 10:10:28.448765     787 scope.go:117] "RemoveContainer" containerID="5de7b7bb4ab5b868f88423fe2fc3ba9adcea499999409590c7abb10d70b00d3d"
	Nov 23 10:10:28 old-k8s-version-706028 kubelet[787]: I1123 10:10:28.449049     787 scope.go:117] "RemoveContainer" containerID="c5ec8602847c185ff0bd5b175bcda823368a9380dc871c5be2c3ac84fd5e4292"
	Nov 23 10:10:28 old-k8s-version-706028 kubelet[787]: E1123 10:10:28.449347     787 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dwlf5_kubernetes-dashboard(c818242a-19c9-4be3-995d-fe06e5960ea5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dwlf5" podUID="c818242a-19c9-4be3-995d-fe06e5960ea5"
	Nov 23 10:10:36 old-k8s-version-706028 kubelet[787]: I1123 10:10:36.958515     787 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 23 10:10:36 old-k8s-version-706028 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:10:37 old-k8s-version-706028 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:10:37 old-k8s-version-706028 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [02db52ed7a4e551150e8645311a4cfd60769ef5552d108ac02a63489a373aba2] <==
	2025/11/23 10:10:03 Starting overwatch
	2025/11/23 10:10:03 Using namespace: kubernetes-dashboard
	2025/11/23 10:10:03 Using in-cluster config to connect to apiserver
	2025/11/23 10:10:03 Using secret token for csrf signing
	2025/11/23 10:10:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 10:10:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 10:10:03 Successful initial request to the apiserver, version: v1.28.0
	2025/11/23 10:10:03 Generating JWE encryption key
	2025/11/23 10:10:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 10:10:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 10:10:03 Initializing JWE encryption key from synchronized object
	2025/11/23 10:10:03 Creating in-cluster Sidecar client
	2025/11/23 10:10:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:10:03 Serving insecurely on HTTP port: 9090
	2025/11/23 10:10:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [828cd3adcf6b5681aa7d384f69cb7566664e59a1ab84ee837327f44e3e645dfc] <==
	I1123 10:09:43.660151       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 10:10:13.725774       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9bf3dd205682ea3296e952ceb1dadbbe4532b2c1e06757abe529e2af9a50d562] <==
	I1123 10:10:14.466406       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:10:14.486435       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:10:14.487228       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 10:10:31.885142       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:10:31.885430       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-706028_b91ca60c-78f2-4d67-9f45-a34c10d662b4!
	I1123 10:10:31.886107       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6338f536-3941-4183-9bc9-75c073ed286e", APIVersion:"v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-706028_b91ca60c-78f2-4d67-9f45-a34c10d662b4 became leader
	I1123 10:10:31.986143       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-706028_b91ca60c-78f2-4d67-9f45-a34c10d662b4!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-706028 -n old-k8s-version-706028
E1123 10:10:43.186964  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/enable-default-cni-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-706028 -n old-k8s-version-706028: exit status 2 (385.890655ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-706028 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.86s)
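
To replay this failure locally, the post-mortem commands captured above can be rerun by hand against the same profile. This is a minimal sketch, not part of the test run: it assumes a tree with the built out/minikube-linux-arm64 binary and an existing old-k8s-version-706028 profile, and the pause invocation is the assumed analogue of the test's pause step with this profile name substituted.

	# pause step for this profile (assumed analogue of the test's pause invocation)
	out/minikube-linux-arm64 pause -p old-k8s-version-706028 --alsologtostderr -v=1
	# post-mortem checks, exactly as helpers_test.go runs them above
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-706028 -n old-k8s-version-706028
	kubectl --context old-k8s-version-706028 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running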

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-020224 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-020224 --alsologtostderr -v=1: exit status 80 (1.841398198s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-020224 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:11:52.872565  519945 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:11:52.872792  519945 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:11:52.872823  519945 out.go:374] Setting ErrFile to fd 2...
	I1123 10:11:52.872845  519945 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:11:52.873125  519945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:11:52.873397  519945 out.go:368] Setting JSON to false
	I1123 10:11:52.873538  519945 mustload.go:66] Loading cluster: no-preload-020224
	I1123 10:11:52.874007  519945 config.go:182] Loaded profile config "no-preload-020224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:11:52.874525  519945 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:11:52.899569  519945 host.go:66] Checking if "no-preload-020224" exists ...
	I1123 10:11:52.899885  519945 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:11:52.963438  519945 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 10:11:52.952543045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:11:52.964079  519945 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-020224 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 10:11:52.967495  519945 out.go:179] * Pausing node no-preload-020224 ... 
	I1123 10:11:52.971125  519945 host.go:66] Checking if "no-preload-020224" exists ...
	I1123 10:11:52.971496  519945 ssh_runner.go:195] Run: systemctl --version
	I1123 10:11:52.971548  519945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:11:52.989320  519945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:11:53.096401  519945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:11:53.124315  519945 pause.go:52] kubelet running: true
	I1123 10:11:53.124388  519945 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:11:53.388610  519945 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:11:53.388690  519945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:11:53.462019  519945 cri.go:89] found id: "7f683b37fb2e222c6e33a53d3dd7bc514b2b5e218719c2446f59bd1db11e26f1"
	I1123 10:11:53.462044  519945 cri.go:89] found id: "b70888786109ff5bcd4b3c55c8ff29deccf75501effb0a21482fde850addde12"
	I1123 10:11:53.462049  519945 cri.go:89] found id: "68f6227f68631f93834013a157602cddcb5a711bae38e8f85120cd85c0718b34"
	I1123 10:11:53.462053  519945 cri.go:89] found id: "e17889e3bbe35de35cd8f26268b7a93a6ea26479b3e8840877416b118ac06f7c"
	I1123 10:11:53.462057  519945 cri.go:89] found id: "bafae4c509b34428ee8a90309affc818c464f230a656437d45944bad64ebec14"
	I1123 10:11:53.462060  519945 cri.go:89] found id: "fde673b61a03b720e2492ddf051014b494251142f14b9bcf92cb9b5416dc9304"
	I1123 10:11:53.462063  519945 cri.go:89] found id: "e20d0c00b09f6363ed0697d9006ccbda9d1b29842f0e683983474b226e898361"
	I1123 10:11:53.462066  519945 cri.go:89] found id: "cc22d1a213207a0fdc2938062c1f1f5505506d20a750b9890fa2b63926bbbfa7"
	I1123 10:11:53.462070  519945 cri.go:89] found id: "ec9f0e1b62e29a096907f8e55276c570c3b3ba64c77efeee2ee959d0dec0f641"
	I1123 10:11:53.462107  519945 cri.go:89] found id: "35353a7fdcee3b1c11f15830f6a44f93a5172d7e420261be0ec4df87bb885de5"
	I1123 10:11:53.462115  519945 cri.go:89] found id: "a1167a40e0a499dbfedc0b42aed6734a35d1bc25e255a90cfffe0a0e7023eb30"
	I1123 10:11:53.462119  519945 cri.go:89] found id: ""
	I1123 10:11:53.462178  519945 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:11:53.473607  519945 retry.go:31] will retry after 227.400717ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:11:53Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:11:53.702076  519945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:11:53.715315  519945 pause.go:52] kubelet running: false
	I1123 10:11:53.715404  519945 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:11:53.893402  519945 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:11:53.893601  519945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:11:53.977608  519945 cri.go:89] found id: "7f683b37fb2e222c6e33a53d3dd7bc514b2b5e218719c2446f59bd1db11e26f1"
	I1123 10:11:53.977634  519945 cri.go:89] found id: "b70888786109ff5bcd4b3c55c8ff29deccf75501effb0a21482fde850addde12"
	I1123 10:11:53.977639  519945 cri.go:89] found id: "68f6227f68631f93834013a157602cddcb5a711bae38e8f85120cd85c0718b34"
	I1123 10:11:53.977643  519945 cri.go:89] found id: "e17889e3bbe35de35cd8f26268b7a93a6ea26479b3e8840877416b118ac06f7c"
	I1123 10:11:53.977647  519945 cri.go:89] found id: "bafae4c509b34428ee8a90309affc818c464f230a656437d45944bad64ebec14"
	I1123 10:11:53.977650  519945 cri.go:89] found id: "fde673b61a03b720e2492ddf051014b494251142f14b9bcf92cb9b5416dc9304"
	I1123 10:11:53.977658  519945 cri.go:89] found id: "e20d0c00b09f6363ed0697d9006ccbda9d1b29842f0e683983474b226e898361"
	I1123 10:11:53.977661  519945 cri.go:89] found id: "cc22d1a213207a0fdc2938062c1f1f5505506d20a750b9890fa2b63926bbbfa7"
	I1123 10:11:53.977665  519945 cri.go:89] found id: "ec9f0e1b62e29a096907f8e55276c570c3b3ba64c77efeee2ee959d0dec0f641"
	I1123 10:11:53.977671  519945 cri.go:89] found id: "35353a7fdcee3b1c11f15830f6a44f93a5172d7e420261be0ec4df87bb885de5"
	I1123 10:11:53.977674  519945 cri.go:89] found id: "a1167a40e0a499dbfedc0b42aed6734a35d1bc25e255a90cfffe0a0e7023eb30"
	I1123 10:11:53.977677  519945 cri.go:89] found id: ""
	I1123 10:11:53.977723  519945 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:11:53.989238  519945 retry.go:31] will retry after 364.322221ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:11:53Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:11:54.353780  519945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:11:54.369756  519945 pause.go:52] kubelet running: false
	I1123 10:11:54.369839  519945 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:11:54.547193  519945 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:11:54.547284  519945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:11:54.622803  519945 cri.go:89] found id: "7f683b37fb2e222c6e33a53d3dd7bc514b2b5e218719c2446f59bd1db11e26f1"
	I1123 10:11:54.622823  519945 cri.go:89] found id: "b70888786109ff5bcd4b3c55c8ff29deccf75501effb0a21482fde850addde12"
	I1123 10:11:54.622828  519945 cri.go:89] found id: "68f6227f68631f93834013a157602cddcb5a711bae38e8f85120cd85c0718b34"
	I1123 10:11:54.622831  519945 cri.go:89] found id: "e17889e3bbe35de35cd8f26268b7a93a6ea26479b3e8840877416b118ac06f7c"
	I1123 10:11:54.622834  519945 cri.go:89] found id: "bafae4c509b34428ee8a90309affc818c464f230a656437d45944bad64ebec14"
	I1123 10:11:54.622838  519945 cri.go:89] found id: "fde673b61a03b720e2492ddf051014b494251142f14b9bcf92cb9b5416dc9304"
	I1123 10:11:54.622841  519945 cri.go:89] found id: "e20d0c00b09f6363ed0697d9006ccbda9d1b29842f0e683983474b226e898361"
	I1123 10:11:54.622845  519945 cri.go:89] found id: "cc22d1a213207a0fdc2938062c1f1f5505506d20a750b9890fa2b63926bbbfa7"
	I1123 10:11:54.622848  519945 cri.go:89] found id: "ec9f0e1b62e29a096907f8e55276c570c3b3ba64c77efeee2ee959d0dec0f641"
	I1123 10:11:54.622853  519945 cri.go:89] found id: "35353a7fdcee3b1c11f15830f6a44f93a5172d7e420261be0ec4df87bb885de5"
	I1123 10:11:54.622856  519945 cri.go:89] found id: "a1167a40e0a499dbfedc0b42aed6734a35d1bc25e255a90cfffe0a0e7023eb30"
	I1123 10:11:54.622860  519945 cri.go:89] found id: ""
	I1123 10:11:54.622909  519945 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:11:54.637771  519945 out.go:203] 
	W1123 10:11:54.640597  519945 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:11:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:11:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:11:54.640616  519945 out.go:285] * 
	* 
	W1123 10:11:54.647659  519945 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:11:54.650662  519945 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-020224 --alsologtostderr -v=1 failed: exit status 80
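The pause failure above comes from "sudo runc list -f json" exiting 1 with "open /run/runc: no such file or directory" inside the node container. A minimal manual triage sketch, assuming the no-preload-020224 profile from this run is still present; it only re-runs by hand the probes already shown in the log, and is illustrative rather than part of the test:

	# Check whether the runc state directory that "runc list" reads actually exists in the node.
	minikube ssh -p no-preload-020224 -- "ls -ld /run/runc"
	# Re-run the exact command the pause path failed on.
	minikube ssh -p no-preload-020224 -- "sudo runc list -f json"
	# Compare against the CRI view of containers queried earlier in the same log.
	minikube ssh -p no-preload-020224 -- "sudo crictl ps -a --quiet"

If /run/runc is missing while crictl still reports containers, that matches the GUEST_PAUSE error recorded above.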
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-020224
helpers_test.go:243: (dbg) docker inspect no-preload-020224:

-- stdout --
	[
	    {
	        "Id": "18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0",
	        "Created": "2025-11-23T10:09:02.634228682Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 514606,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:10:39.952872433Z",
	            "FinishedAt": "2025-11-23T10:10:38.856432398Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/hostname",
	        "HostsPath": "/var/lib/docker/containers/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/hosts",
	        "LogPath": "/var/lib/docker/containers/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0-json.log",
	        "Name": "/no-preload-020224",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-020224:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-020224",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0",
	                "LowerDir": "/var/lib/docker/overlay2/fa5d3a25bcb7f58c03a8da4f93eb6974e9507a851f3a34e8ca39457b619a17bf-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa5d3a25bcb7f58c03a8da4f93eb6974e9507a851f3a34e8ca39457b619a17bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa5d3a25bcb7f58c03a8da4f93eb6974e9507a851f3a34e8ca39457b619a17bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa5d3a25bcb7f58c03a8da4f93eb6974e9507a851f3a34e8ca39457b619a17bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-020224",
	                "Source": "/var/lib/docker/volumes/no-preload-020224/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-020224",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-020224",
	                "name.minikube.sigs.k8s.io": "no-preload-020224",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f1797686c00aa2c07fa10dc986715c7e7be8bdf0445b6bc8ff9185c84e2a1d11",
	            "SandboxKey": "/var/run/docker/netns/f1797686c00a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-020224": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:e4:39:30:cb:c8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5bdf554cce75de475d0aa700ed33b59629266aa02ea95fbb3579c79c5e0148ad",
	                    "EndpointID": "7cb3aa10bc93c76e648665a5884b53e6d7303f50384cca6c6d37b3dcaac34f6b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-020224",
	                        "18d5b0a18428"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-020224 -n no-preload-020224
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-020224 -n no-preload-020224: exit status 2 (393.279371ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-020224 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-020224 logs -n 25: (1.326567896s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p calico-507563 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo containerd config dump                                                                                                                                                                                                  │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo crio config                                                                                                                                                                                                             │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ delete  │ -p calico-507563                                                                                                                                                                                                                              │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-706028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │                     │
	│ stop    │ -p old-k8s-version-706028 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-706028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p old-k8s-version-706028 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-020224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ stop    │ -p no-preload-020224 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ image   │ old-k8s-version-706028 image list --format=json                                                                                                                                                                                               │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ pause   │ -p old-k8s-version-706028 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-020224 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:11 UTC │
	│ delete  │ -p old-k8s-version-706028                                                                                                                                                                                                                     │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ delete  │ -p old-k8s-version-706028                                                                                                                                                                                                                     │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-566990     │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ image   │ no-preload-020224 image list --format=json                                                                                                                                                                                                    │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │ 23 Nov 25 10:11 UTC │
	│ pause   │ -p no-preload-020224 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:10:46
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:10:46.943623  516347 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:10:46.944095  516347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:46.944106  516347 out.go:374] Setting ErrFile to fd 2...
	I1123 10:10:46.944110  516347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:46.944378  516347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:10:46.944794  516347 out.go:368] Setting JSON to false
	I1123 10:10:46.945684  516347 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10396,"bootTime":1763882251,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:10:46.945763  516347 start.go:143] virtualization:  
	I1123 10:10:46.949103  516347 out.go:179] * [embed-certs-566990] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:10:46.952935  516347 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 10:10:46.953195  516347 notify.go:221] Checking for updates...
	I1123 10:10:46.958846  516347 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:10:46.961774  516347 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:10:46.964654  516347 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 10:10:46.967423  516347 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:10:46.970231  516347 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:10:46.973688  516347 config.go:182] Loaded profile config "no-preload-020224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:46.973843  516347 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:10:47.015038  516347 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:10:47.015216  516347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:10:47.080126  516347 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 10:10:47.070054735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:10:47.080230  516347 docker.go:319] overlay module found
	I1123 10:10:47.083788  516347 out.go:179] * Using the docker driver based on user configuration
	I1123 10:10:47.086759  516347 start.go:309] selected driver: docker
	I1123 10:10:47.086778  516347 start.go:927] validating driver "docker" against <nil>
	I1123 10:10:47.086792  516347 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:10:47.087462  516347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:10:47.171749  516347 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 10:10:47.156932254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:10:47.171905  516347 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:10:47.172118  516347 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:10:47.175026  516347 out.go:179] * Using Docker driver with root privileges
	I1123 10:10:47.177976  516347 cni.go:84] Creating CNI manager for ""
	I1123 10:10:47.178047  516347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:10:47.178055  516347 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:10:47.178135  516347 start.go:353] cluster config:
	{Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:10:47.181303  516347 out.go:179] * Starting "embed-certs-566990" primary control-plane node in "embed-certs-566990" cluster
	I1123 10:10:47.184256  516347 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:10:47.187222  516347 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:10:47.190038  516347 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:10:47.190091  516347 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 10:10:47.190100  516347 cache.go:65] Caching tarball of preloaded images
	I1123 10:10:47.190174  516347 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 10:10:47.190183  516347 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:10:47.190288  516347 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/config.json ...
	I1123 10:10:47.190306  516347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/config.json: {Name:mk9f0c217c2ecd7bc9f554d07a2532acdc5529fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:47.190456  516347 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:10:47.211788  516347 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:10:47.211808  516347 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:10:47.211823  516347 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:10:47.211852  516347 start.go:360] acquireMachinesLock for embed-certs-566990: {Name:mkc766faecda88b98c3d85f6aada2ef6121554c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:10:47.211956  516347 start.go:364] duration metric: took 88.797µs to acquireMachinesLock for "embed-certs-566990"
	I1123 10:10:47.211985  516347 start.go:93] Provisioning new machine with config: &{Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:10:47.212047  516347 start.go:125] createHost starting for "" (driver="docker")
	I1123 10:10:44.617080  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:10:44.635943  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:10:44.653535  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:10:44.671259  514436 provision.go:87] duration metric: took 785.346327ms to configureAuth
	I1123 10:10:44.671335  514436 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:10:44.671563  514436 config.go:182] Loaded profile config "no-preload-020224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:44.671717  514436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:10:44.688691  514436 main.go:143] libmachine: Using SSH client type: native
	I1123 10:10:44.689010  514436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33476 <nil> <nil>}
	I1123 10:10:44.689024  514436 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:10:45.125004  514436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:10:45.125029  514436 machine.go:97] duration metric: took 4.8207253s to provisionDockerMachine
	I1123 10:10:45.125044  514436 start.go:293] postStartSetup for "no-preload-020224" (driver="docker")
	I1123 10:10:45.125055  514436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:10:45.125136  514436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:10:45.125188  514436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:10:45.148895  514436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:10:45.303904  514436 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:10:45.309213  514436 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:10:45.309307  514436 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:10:45.309334  514436 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 10:10:45.309471  514436 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 10:10:45.309625  514436 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 10:10:45.309797  514436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:10:45.321532  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:10:45.352168  514436 start.go:296] duration metric: took 227.108629ms for postStartSetup
	I1123 10:10:45.352597  514436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:10:45.352949  514436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:10:45.376291  514436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:10:45.523855  514436 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:10:45.530265  514436 fix.go:56] duration metric: took 5.645271712s for fixHost
	I1123 10:10:45.530295  514436 start.go:83] releasing machines lock for "no-preload-020224", held for 5.645325193s
	I1123 10:10:45.530395  514436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-020224
	I1123 10:10:45.551737  514436 ssh_runner.go:195] Run: cat /version.json
	I1123 10:10:45.551788  514436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:10:45.552065  514436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:10:45.552133  514436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:10:45.583405  514436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:10:45.606973  514436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:10:45.705128  514436 ssh_runner.go:195] Run: systemctl --version
	I1123 10:10:45.812484  514436 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:10:45.852026  514436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:10:45.856782  514436 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:10:45.856903  514436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:10:45.865720  514436 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:10:45.865785  514436 start.go:496] detecting cgroup driver to use...
	I1123 10:10:45.865834  514436 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:10:45.865911  514436 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:10:45.882009  514436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:10:45.896593  514436 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:10:45.896699  514436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:10:45.912756  514436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:10:45.927649  514436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:10:46.074565  514436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:10:46.234048  514436 docker.go:234] disabling docker service ...
	I1123 10:10:46.234157  514436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:10:46.257722  514436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:10:46.291789  514436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:10:46.436230  514436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:10:46.591064  514436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:10:46.605618  514436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:10:46.623348  514436 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:10:46.623450  514436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:10:46.632716  514436 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:10:46.632807  514436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:10:46.642542  514436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:10:46.653124  514436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:10:46.663094  514436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:10:46.674854  514436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:10:46.683746  514436 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:10:46.692405  514436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:10:46.703419  514436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:10:46.711400  514436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:10:46.719058  514436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:10:46.864803  514436 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:10:47.086604  514436 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:10:47.086668  514436 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:10:47.091907  514436 start.go:564] Will wait 60s for crictl version
	I1123 10:10:47.091966  514436 ssh_runner.go:195] Run: which crictl
	I1123 10:10:47.096984  514436 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:10:47.143426  514436 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:10:47.143515  514436 ssh_runner.go:195] Run: crio --version
	I1123 10:10:47.183860  514436 ssh_runner.go:195] Run: crio --version
	I1123 10:10:47.235684  514436 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:10:47.238585  514436 cli_runner.go:164] Run: docker network inspect no-preload-020224 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:10:47.256453  514436 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 10:10:47.263095  514436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:10:47.275005  514436 kubeadm.go:884] updating cluster {Name:no-preload-020224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:10:47.275129  514436 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:10:47.275177  514436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:10:47.337659  514436 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:10:47.337679  514436 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:10:47.337687  514436 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1123 10:10:47.337781  514436 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-020224 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-020224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:10:47.337861  514436 ssh_runner.go:195] Run: crio config
	I1123 10:10:47.432640  514436 cni.go:84] Creating CNI manager for ""
	I1123 10:10:47.432664  514436 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:10:47.432679  514436 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:10:47.432702  514436 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-020224 NodeName:no-preload-020224 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:10:47.432832  514436 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-020224"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:10:47.432904  514436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:10:47.442488  514436 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:10:47.442573  514436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:10:47.455780  514436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:10:47.486191  514436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:10:47.508272  514436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
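The kubeadm config printed above is what gets written to /var/tmp/minikube/kubeadm.yaml.new here. A minimal, illustrative sketch of how a fragment like its ClusterConfiguration could be rendered from the options logged by kubeadm.go:190 using text/template; the struct and field names below are assumptions for illustration, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A tiny fragment of a ClusterConfiguration, parameterized the way the
// logged options suggest (endpoint, version, CIDRs).
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type opts struct {
	ControlPlaneAddress string
	APIServerPort       int
	KubernetesVersion   string
	DNSDomain           string
	PodSubnet           string
	ServiceCIDR         string
}

func main() {
	t := template.Must(template.New("cfg").Parse(clusterCfg))
	// Values copied from the options in the log above.
	_ = t.Execute(os.Stdout, opts{
		ControlPlaneAddress: "control-plane.minikube.internal",
		APIServerPort:       8443,
		KubernetesVersion:   "v1.34.1",
		DNSDomain:           "cluster.local",
		PodSubnet:           "10.244.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
	})
}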
	I1123 10:10:47.522369  514436 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:10:47.526158  514436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:10:47.537244  514436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:10:47.669646  514436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:10:47.693633  514436 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224 for IP: 192.168.85.2
	I1123 10:10:47.693651  514436 certs.go:195] generating shared ca certs ...
	I1123 10:10:47.693666  514436 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:47.693799  514436 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:10:47.693843  514436 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:10:47.693850  514436 certs.go:257] generating profile certs ...
	I1123 10:10:47.693928  514436 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/client.key
	I1123 10:10:47.693997  514436 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.key.d87566b3
	I1123 10:10:47.694034  514436 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.key
	I1123 10:10:47.694137  514436 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:10:47.694166  514436 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:10:47.694174  514436 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:10:47.694200  514436 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:10:47.694225  514436 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:10:47.694248  514436 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:10:47.694321  514436 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:10:47.694936  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:10:47.722692  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:10:47.756799  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:10:47.788242  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:10:47.818128  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:10:47.843448  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:10:47.888147  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:10:47.967060  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:10:47.989147  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:10:48.025807  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:10:48.058047  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:10:48.087089  514436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:10:48.115476  514436 ssh_runner.go:195] Run: openssl version
	I1123 10:10:48.121938  514436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:10:48.132750  514436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:10:48.137665  514436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:10:48.137725  514436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:10:48.196309  514436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 10:10:48.220618  514436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:10:48.229014  514436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:10:48.232930  514436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:10:48.233053  514436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:10:48.277865  514436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:10:48.288957  514436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:10:48.297761  514436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:10:48.302227  514436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:10:48.302307  514436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:10:48.383276  514436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
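The openssl/ln sequence above publishes each CA under /etc/ssl/certs by its subject hash (for example minikubeCA.pem is linked as b5213941.0). A minimal sketch of that hash-and-symlink step, assuming openssl is on PATH and the target directory is writable; the paths are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA computes the OpenSSL subject hash of a CA PEM and exposes it as
// <hash>.0 in certDir, the same pattern as `ln -fs ... /etc/ssl/certs/<hash>.0`.
func linkCA(pemPath, certDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA in the log
	if err := os.MkdirAll(certDir, 0755); err != nil {
		return "", err
	}
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // ln -fs equivalent: replace any stale link
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs-example")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("linked:", link)
}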
	I1123 10:10:48.409555  514436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:10:48.424766  514436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:10:48.544261  514436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:10:48.638209  514436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:10:48.710619  514436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:10:48.779780  514436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:10:48.854733  514436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
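Each `-checkend 86400` call above only confirms the certificate is still valid for at least the next 24 hours. The same check can be done natively with crypto/x509 instead of shelling out to openssl; the path below is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the certificate at certPath will still be valid
// after duration d, mirroring `openssl x509 -checkend <seconds>`.
func validFor(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}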
	I1123 10:10:48.968489  514436 kubeadm.go:401] StartCluster: {Name:no-preload-020224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:10:48.968604  514436 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:10:48.968674  514436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:10:49.079770  514436 cri.go:89] found id: "fde673b61a03b720e2492ddf051014b494251142f14b9bcf92cb9b5416dc9304"
	I1123 10:10:49.079803  514436 cri.go:89] found id: "e20d0c00b09f6363ed0697d9006ccbda9d1b29842f0e683983474b226e898361"
	I1123 10:10:49.079808  514436 cri.go:89] found id: "cc22d1a213207a0fdc2938062c1f1f5505506d20a750b9890fa2b63926bbbfa7"
	I1123 10:10:49.079812  514436 cri.go:89] found id: "ec9f0e1b62e29a096907f8e55276c570c3b3ba64c77efeee2ee959d0dec0f641"
	I1123 10:10:49.079815  514436 cri.go:89] found id: ""
	I1123 10:10:49.079863  514436 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:10:49.108972  514436 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:10:49Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:10:49.109067  514436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:10:49.137797  514436 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:10:49.137829  514436 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:10:49.137880  514436 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:10:49.146502  514436 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:10:49.146940  514436 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-020224" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:10:49.148070  514436 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-282998/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-020224" cluster setting kubeconfig missing "no-preload-020224" context setting]
	I1123 10:10:49.148405  514436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:49.150103  514436 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:10:49.162470  514436 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 10:10:49.162506  514436 kubeadm.go:602] duration metric: took 24.669678ms to restartPrimaryControlPlane
	I1123 10:10:49.162516  514436 kubeadm.go:403] duration metric: took 194.038688ms to StartCluster
	I1123 10:10:49.162543  514436 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:49.162618  514436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:10:49.163776  514436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:49.165208  514436 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:10:49.165480  514436 config.go:182] Loaded profile config "no-preload-020224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:49.165563  514436 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:10:49.165874  514436 addons.go:70] Setting storage-provisioner=true in profile "no-preload-020224"
	I1123 10:10:49.165894  514436 addons.go:239] Setting addon storage-provisioner=true in "no-preload-020224"
	W1123 10:10:49.165900  514436 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:10:49.165929  514436 host.go:66] Checking if "no-preload-020224" exists ...
	I1123 10:10:49.166377  514436 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:10:49.166546  514436 addons.go:70] Setting dashboard=true in profile "no-preload-020224"
	I1123 10:10:49.166577  514436 addons.go:239] Setting addon dashboard=true in "no-preload-020224"
	W1123 10:10:49.166584  514436 addons.go:248] addon dashboard should already be in state true
	I1123 10:10:49.166608  514436 host.go:66] Checking if "no-preload-020224" exists ...
	I1123 10:10:49.166887  514436 addons.go:70] Setting default-storageclass=true in profile "no-preload-020224"
	I1123 10:10:49.166904  514436 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-020224"
	I1123 10:10:49.167026  514436 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:10:49.167405  514436 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:10:49.177669  514436 out.go:179] * Verifying Kubernetes components...
	I1123 10:10:49.181016  514436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:10:49.223759  514436 addons.go:239] Setting addon default-storageclass=true in "no-preload-020224"
	W1123 10:10:49.223781  514436 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:10:49.223806  514436 host.go:66] Checking if "no-preload-020224" exists ...
	I1123 10:10:49.224227  514436 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:10:49.235610  514436 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:10:49.241658  514436 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:10:49.251488  514436 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:10:49.251571  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:10:49.251582  514436 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:10:49.251649  514436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:10:49.255528  514436 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:10:49.255555  514436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:10:49.255651  514436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:10:49.279905  514436 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:10:49.279929  514436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:10:49.279989  514436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:10:49.306700  514436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:10:49.324024  514436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:10:49.326240  514436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:10:47.215352  516347 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 10:10:47.215611  516347 start.go:159] libmachine.API.Create for "embed-certs-566990" (driver="docker")
	I1123 10:10:47.215659  516347 client.go:173] LocalClient.Create starting
	I1123 10:10:47.215733  516347 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem
	I1123 10:10:47.215768  516347 main.go:143] libmachine: Decoding PEM data...
	I1123 10:10:47.215788  516347 main.go:143] libmachine: Parsing certificate...
	I1123 10:10:47.215847  516347 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem
	I1123 10:10:47.215869  516347 main.go:143] libmachine: Decoding PEM data...
	I1123 10:10:47.215887  516347 main.go:143] libmachine: Parsing certificate...
	I1123 10:10:47.216287  516347 cli_runner.go:164] Run: docker network inspect embed-certs-566990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:10:47.233333  516347 cli_runner.go:211] docker network inspect embed-certs-566990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:10:47.233464  516347 network_create.go:284] running [docker network inspect embed-certs-566990] to gather additional debugging logs...
	I1123 10:10:47.233491  516347 cli_runner.go:164] Run: docker network inspect embed-certs-566990
	W1123 10:10:47.250946  516347 cli_runner.go:211] docker network inspect embed-certs-566990 returned with exit code 1
	I1123 10:10:47.250984  516347 network_create.go:287] error running [docker network inspect embed-certs-566990]: docker network inspect embed-certs-566990: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-566990 not found
	I1123 10:10:47.251052  516347 network_create.go:289] output of [docker network inspect embed-certs-566990]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-566990 not found
	
	** /stderr **
	I1123 10:10:47.251189  516347 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:10:47.278028  516347 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d56166f18c3a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0e:f2:0f:1a:18:9c} reservation:<nil>}
	I1123 10:10:47.278403  516347 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fe6f7fd59576 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:8b:f7:8e:2b:59} reservation:<nil>}
	I1123 10:10:47.278654  516347 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c262e08021b1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:16:63:f0:32:b6} reservation:<nil>}
	I1123 10:10:47.279071  516347 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a9e50}
	I1123 10:10:47.279096  516347 network_create.go:124] attempt to create docker network embed-certs-566990 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 10:10:47.279153  516347 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-566990 embed-certs-566990
	I1123 10:10:47.353311  516347 network_create.go:108] docker network embed-certs-566990 192.168.76.0/24 created
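The network.go lines above walk candidate private /24s, skip the ones already claimed by other profiles (192.168.49.0/24, .58, .67) and settle on 192.168.76.0/24. A minimal sketch of that selection; the step of 9 between candidates is inferred from the logged progression, not taken from minikube's source:

package main

import (
	"fmt"
	"net/netip"
)

// freeSubnet returns the first candidate /24 that is not already taken.
func freeSubnet(taken map[string]bool) (string, error) {
	// Candidates as seen in the log: 192.168.49.0/24, .58, .67, .76, ...
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if _, err := netip.ParsePrefix(cidr); err != nil {
			return "", err
		}
		if !taken[cidr] {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free /24 found")
}

func main() {
	// Subnets already occupied by existing bridge networks, as in the log.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	cidr, _ := freeSubnet(taken)
	fmt.Println("using free private subnet", cidr) // 192.168.76.0/24
}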
	I1123 10:10:47.353345  516347 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-566990" container
	I1123 10:10:47.353521  516347 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:10:47.370842  516347 cli_runner.go:164] Run: docker volume create embed-certs-566990 --label name.minikube.sigs.k8s.io=embed-certs-566990 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:10:47.389735  516347 oci.go:103] Successfully created a docker volume embed-certs-566990
	I1123 10:10:47.389828  516347 cli_runner.go:164] Run: docker run --rm --name embed-certs-566990-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-566990 --entrypoint /usr/bin/test -v embed-certs-566990:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:10:48.006318  516347 oci.go:107] Successfully prepared a docker volume embed-certs-566990
	I1123 10:10:48.006398  516347 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:10:48.006409  516347 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:10:48.006483  516347 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-566990:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 10:10:49.631921  514436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:10:49.663125  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:10:49.663147  514436 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:10:49.667526  514436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:10:49.684895  514436 node_ready.go:35] waiting up to 6m0s for node "no-preload-020224" to be "Ready" ...
	I1123 10:10:49.707342  514436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:10:49.715217  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:10:49.715290  514436 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:10:49.776300  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:10:49.776383  514436 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:10:49.883164  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:10:49.883238  514436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:10:49.959486  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:10:49.959562  514436 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:10:50.043736  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:10:50.043812  514436 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:10:50.078917  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:10:50.078993  514436 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:10:50.117342  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:10:50.117432  514436 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:10:50.155429  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:10:50.155506  514436 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:10:50.189318  514436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:10:53.762090  516347 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-566990:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.75555957s)
	I1123 10:10:53.762128  516347 kic.go:203] duration metric: took 5.755706051s to extract preloaded images to volume ...
	W1123 10:10:53.762260  516347 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 10:10:53.762363  516347 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:10:53.854067  516347 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-566990 --name embed-certs-566990 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-566990 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-566990 --network embed-certs-566990 --ip 192.168.76.2 --volume embed-certs-566990:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:10:54.272102  516347 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Running}}
	I1123 10:10:54.302309  516347 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:10:54.341098  516347 cli_runner.go:164] Run: docker exec embed-certs-566990 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:10:54.411187  516347 oci.go:144] the created container "embed-certs-566990" has a running status.
	I1123 10:10:54.411215  516347 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa...
	I1123 10:10:55.051678  516347 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:10:55.075515  516347 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:10:55.107357  516347 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:10:55.107376  516347 kic_runner.go:114] Args: [docker exec --privileged embed-certs-566990 chown docker:docker /home/docker/.ssh/authorized_keys]
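kic.go:225 above generates an RSA key for the new container and installs the public half as /home/docker/.ssh/authorized_keys. A minimal sketch of producing such a keypair and the authorized_keys line, assuming the golang.org/x/crypto/ssh module is available; the output path is illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private key, PEM-encoded, roughly what lands in .../machines/<name>/id_rsa.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	_ = os.WriteFile("/tmp/id_rsa-example", privPEM, 0600)

	// Public half in authorized_keys format, i.e. the content copied to
	// /home/docker/.ssh/authorized_keys inside the container.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(ssh.MarshalAuthorizedKey(pub)))
}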
	I1123 10:10:55.225469  516347 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:10:55.252832  516347 machine.go:94] provisionDockerMachine start ...
	I1123 10:10:55.252941  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:10:55.277519  516347 main.go:143] libmachine: Using SSH client type: native
	I1123 10:10:55.277854  516347 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33481 <nil> <nil>}
	I1123 10:10:55.277870  516347 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:10:55.278549  516347 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 10:10:55.553037  514436 node_ready.go:49] node "no-preload-020224" is "Ready"
	I1123 10:10:55.553069  514436 node_ready.go:38] duration metric: took 5.868098777s for node "no-preload-020224" to be "Ready" ...
	I1123 10:10:55.553084  514436 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:10:55.553145  514436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:10:57.506904  514436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.839346468s)
	I1123 10:10:57.506967  514436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.799554893s)
	I1123 10:10:57.507219  514436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.317822949s)
	I1123 10:10:57.507436  514436 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.954274127s)
	I1123 10:10:57.507485  514436 api_server.go:72] duration metric: took 8.341930566s to wait for apiserver process to appear ...
	I1123 10:10:57.507509  514436 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:10:57.507540  514436 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:10:57.510436  514436 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-020224 addons enable metrics-server
	
	I1123 10:10:57.515548  514436 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 10:10:57.516886  514436 api_server.go:141] control plane version: v1.34.1
	I1123 10:10:57.516907  514436 api_server.go:131] duration metric: took 9.378292ms to wait for apiserver health ...
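The healthz probe above expects nothing more than an HTTP 200 with body "ok" from https://192.168.85.2:8443/healthz. A bare-bones sketch of that request; TLS verification is skipped here purely for illustration, which real code should not do:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}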
	I1123 10:10:57.516915  514436 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:10:57.521608  514436 system_pods.go:59] 8 kube-system pods found
	I1123 10:10:57.521650  514436 system_pods.go:61] "coredns-66bc5c9577-v59bz" [9cd5752f-f6a3-4db9-a644-1c18ff268642] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:10:57.521661  514436 system_pods.go:61] "etcd-no-preload-020224" [8dccbade-8a60-4d0f-9676-d6a2755663f9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:10:57.521668  514436 system_pods.go:61] "kindnet-ghq9t" [a82575e8-2a03-4722-8611-dab3ceda4f39] Running
	I1123 10:10:57.521675  514436 system_pods.go:61] "kube-apiserver-no-preload-020224" [a7f60049-0c2f-4359-9d93-d13658d03d02] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:10:57.521681  514436 system_pods.go:61] "kube-controller-manager-no-preload-020224" [8a60d5f3-d38b-408b-ac99-8e9e3cc1da22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:10:57.521694  514436 system_pods.go:61] "kube-proxy-7s6pf" [54924ab5-094f-48de-8483-f31455e53773] Running
	I1123 10:10:57.521700  514436 system_pods.go:61] "kube-scheduler-no-preload-020224" [313e344b-1c48-4c74-8237-387cff8a8c8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:10:57.521704  514436 system_pods.go:61] "storage-provisioner" [6796ee0a-02e3-4c46-a03b-115136ad2780] Running
	I1123 10:10:57.521710  514436 system_pods.go:74] duration metric: took 4.789177ms to wait for pod list to return data ...
	I1123 10:10:57.521721  514436 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:10:57.522362  514436 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 10:10:57.524681  514436 default_sa.go:45] found service account: "default"
	I1123 10:10:57.524707  514436 default_sa.go:55] duration metric: took 2.979578ms for default service account to be created ...
	I1123 10:10:57.524722  514436 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:10:57.525200  514436 addons.go:530] duration metric: took 8.359637942s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 10:10:57.527562  514436 system_pods.go:86] 8 kube-system pods found
	I1123 10:10:57.527597  514436 system_pods.go:89] "coredns-66bc5c9577-v59bz" [9cd5752f-f6a3-4db9-a644-1c18ff268642] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:10:57.527607  514436 system_pods.go:89] "etcd-no-preload-020224" [8dccbade-8a60-4d0f-9676-d6a2755663f9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:10:57.527613  514436 system_pods.go:89] "kindnet-ghq9t" [a82575e8-2a03-4722-8611-dab3ceda4f39] Running
	I1123 10:10:57.527624  514436 system_pods.go:89] "kube-apiserver-no-preload-020224" [a7f60049-0c2f-4359-9d93-d13658d03d02] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:10:57.527633  514436 system_pods.go:89] "kube-controller-manager-no-preload-020224" [8a60d5f3-d38b-408b-ac99-8e9e3cc1da22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:10:57.527641  514436 system_pods.go:89] "kube-proxy-7s6pf" [54924ab5-094f-48de-8483-f31455e53773] Running
	I1123 10:10:57.527659  514436 system_pods.go:89] "kube-scheduler-no-preload-020224" [313e344b-1c48-4c74-8237-387cff8a8c8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:10:57.527674  514436 system_pods.go:89] "storage-provisioner" [6796ee0a-02e3-4c46-a03b-115136ad2780] Running
	I1123 10:10:57.527683  514436 system_pods.go:126] duration metric: took 2.95411ms to wait for k8s-apps to be running ...
	I1123 10:10:57.527693  514436 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:10:57.527749  514436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:10:57.541369  514436 system_svc.go:56] duration metric: took 13.66648ms WaitForService to wait for kubelet
	I1123 10:10:57.541399  514436 kubeadm.go:587] duration metric: took 8.375860001s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:10:57.541447  514436 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:10:57.544326  514436 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:10:57.544356  514436 node_conditions.go:123] node cpu capacity is 2
	I1123 10:10:57.544369  514436 node_conditions.go:105] duration metric: took 2.916677ms to run NodePressure ...
	I1123 10:10:57.544382  514436 start.go:242] waiting for startup goroutines ...
	I1123 10:10:57.544390  514436 start.go:247] waiting for cluster config update ...
	I1123 10:10:57.544401  514436 start.go:256] writing updated cluster config ...
	I1123 10:10:57.544686  514436 ssh_runner.go:195] Run: rm -f paused
	I1123 10:10:57.548536  514436 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:10:57.551954  514436 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v59bz" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:10:59.562369  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	I1123 10:10:58.433253  516347 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-566990
	
	I1123 10:10:58.433279  516347 ubuntu.go:182] provisioning hostname "embed-certs-566990"
	I1123 10:10:58.433369  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:10:58.451408  516347 main.go:143] libmachine: Using SSH client type: native
	I1123 10:10:58.451740  516347 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33481 <nil> <nil>}
	I1123 10:10:58.451755  516347 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-566990 && echo "embed-certs-566990" | sudo tee /etc/hostname
	I1123 10:10:58.618144  516347 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-566990
	
	I1123 10:10:58.618243  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:10:58.636752  516347 main.go:143] libmachine: Using SSH client type: native
	I1123 10:10:58.637113  516347 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33481 <nil> <nil>}
	I1123 10:10:58.637136  516347 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-566990' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-566990/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-566990' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:10:58.789683  516347 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:10:58.789706  516347 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 10:10:58.789726  516347 ubuntu.go:190] setting up certificates
	I1123 10:10:58.789736  516347 provision.go:84] configureAuth start
	I1123 10:10:58.789808  516347 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-566990
	I1123 10:10:58.807140  516347 provision.go:143] copyHostCerts
	I1123 10:10:58.807205  516347 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 10:10:58.807214  516347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 10:10:58.807289  516347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 10:10:58.807383  516347 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 10:10:58.807388  516347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 10:10:58.807415  516347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 10:10:58.807464  516347 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 10:10:58.807468  516347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 10:10:58.807493  516347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 10:10:58.807536  516347 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.embed-certs-566990 san=[127.0.0.1 192.168.76.2 embed-certs-566990 localhost minikube]
	I1123 10:10:59.148898  516347 provision.go:177] copyRemoteCerts
	I1123 10:10:59.148974  516347 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:10:59.149030  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:10:59.167355  516347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:10:59.274720  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 10:10:59.293984  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:10:59.320633  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:10:59.339630  516347 provision.go:87] duration metric: took 549.87034ms to configureAuth
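provision.go:117 above issues a server certificate whose SANs cover 127.0.0.1, 192.168.76.2, embed-certs-566990, localhost and minikube. A minimal sketch of signing such a certificate with crypto/x509, using a throwaway in-process CA rather than minikube's ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA, standing in for the ca.pem/ca-key.pem pair in the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "exampleCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "embed-certs-566990"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		DNSNames:     []string{"embed-certs-566990", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("issued server cert: %d DER bytes, err=%v\n", len(der), err)
}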
	I1123 10:10:59.339661  516347 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:10:59.339850  516347 config.go:182] Loaded profile config "embed-certs-566990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:59.339959  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:10:59.367088  516347 main.go:143] libmachine: Using SSH client type: native
	I1123 10:10:59.367405  516347 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33481 <nil> <nil>}
	I1123 10:10:59.367430  516347 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:10:59.737195  516347 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:10:59.737270  516347 machine.go:97] duration metric: took 4.484402073s to provisionDockerMachine
	I1123 10:10:59.737297  516347 client.go:176] duration metric: took 12.521626424s to LocalClient.Create
	I1123 10:10:59.737354  516347 start.go:167] duration metric: took 12.521743357s to libmachine.API.Create "embed-certs-566990"
	I1123 10:10:59.737381  516347 start.go:293] postStartSetup for "embed-certs-566990" (driver="docker")
	I1123 10:10:59.737436  516347 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:10:59.737536  516347 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:10:59.737610  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:10:59.764147  516347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:10:59.886438  516347 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:10:59.891709  516347 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:10:59.891736  516347 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:10:59.891747  516347 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 10:10:59.891803  516347 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 10:10:59.891903  516347 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 10:10:59.892017  516347 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:10:59.902530  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:10:59.924741  516347 start.go:296] duration metric: took 187.32662ms for postStartSetup
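filesync.go above scans .minikube/addons and .minikube/files for local assets and mirrors them onto the node, here mapping files/etc/ssl/certs/2849042.pem to /etc/ssl/certs. A rough sketch of that scan under the same directory layout (paths come from the log; the actual copy is reduced to a print):

-- example: scanning local assets for sync (Go sketch) --
package main

import (
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
	"strings"
)

func main() {
	// Root observed in the log; everything under it maps 1:1 onto the node.
	root := "/home/jenkins/minikube-integration/21969-282998/.minikube/files"
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			return nil
		}
		target := strings.TrimPrefix(path, root) // e.g. /etc/ssl/certs/2849042.pem
		fmt.Printf("local asset %s -> %s\n", path, target)
		// minikube then scp's the file to <target> on the node.
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
-- /example --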
	I1123 10:10:59.925218  516347 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-566990
	I1123 10:10:59.952562  516347 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/config.json ...
	I1123 10:10:59.952830  516347 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:10:59.952874  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:10:59.984026  516347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:11:00.151543  516347 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:11:00.160148  516347 start.go:128] duration metric: took 12.948086274s to createHost
	I1123 10:11:00.160177  516347 start.go:83] releasing machines lock for "embed-certs-566990", held for 12.94821266s
	I1123 10:11:00.160284  516347 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-566990
	I1123 10:11:00.204990  516347 ssh_runner.go:195] Run: cat /version.json
	I1123 10:11:00.205055  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:11:00.205528  516347 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:11:00.205604  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:11:00.294182  516347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:11:00.312033  516347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:11:00.527782  516347 ssh_runner.go:195] Run: systemctl --version
	I1123 10:11:00.534488  516347 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:11:00.575240  516347 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:11:00.579625  516347 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:11:00.579724  516347 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:11:00.608651  516347 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
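cni.go above sidelines any bridge/podman CNI configs by renaming them with a .mk_disabled suffix so that kindnet (recommended for the docker driver with the crio runtime) owns pod networking. A rough Go equivalent of that find/mv step, assuming the standard /etc/cni/net.d layout:

-- example: disabling bridge/podman CNI configs (Go sketch) --
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		name := e.Name()
		// Match what the `find` in the log matches: *bridge* or *podman*,
		// skipping files that were already disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("disabled", src)
	}
}
-- /example --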
	I1123 10:11:00.608673  516347 start.go:496] detecting cgroup driver to use...
	I1123 10:11:00.608704  516347 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:11:00.608759  516347 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:11:00.627252  516347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:11:00.642942  516347 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:11:00.643004  516347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:11:00.661493  516347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:11:00.683610  516347 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:11:00.860965  516347 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:11:01.046572  516347 docker.go:234] disabling docker service ...
	I1123 10:11:01.046638  516347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:11:01.082693  516347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:11:01.102691  516347 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:11:01.285967  516347 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:11:01.446585  516347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:11:01.468223  516347 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:11:01.484073  516347 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:11:01.484207  516347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:11:01.494836  516347 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:11:01.494976  516347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:11:01.504335  516347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:11:01.515110  516347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:11:01.524788  516347 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:11:01.533869  516347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:11:01.543411  516347 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:11:01.560432  516347 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:11:01.570291  516347 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:11:01.579698  516347 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:11:01.588439  516347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:11:01.738002  516347 ssh_runner.go:195] Run: sudo systemctl restart crio
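The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, the cgroup manager is set to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls before crio is restarted. A sketch of the same substitutions done with regexp instead of sed (file path and values are taken from the log; this is not the code minikube runs):

-- example: editing 02-crio.conf (Go sketch) --
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)

	// Pin the pause image the kubelet expects (value from the log above).
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Drop any stale conmon_cgroup line, then set cgroupfs and re-add
	// conmon_cgroup right after it, mirroring the sed -i steps.
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	// Allow pods to bind privileged ports, as the default_sysctls edit does.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("updated", path, "- restart crio to apply")
}
-- /example --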
	I1123 10:11:01.981137  516347 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:11:01.981261  516347 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:11:01.985871  516347 start.go:564] Will wait 60s for crictl version
	I1123 10:11:01.986019  516347 ssh_runner.go:195] Run: which crictl
	I1123 10:11:01.990305  516347 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:11:02.042519  516347 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:11:02.042682  516347 ssh_runner.go:195] Run: crio --version
	I1123 10:11:02.083213  516347 ssh_runner.go:195] Run: crio --version
	I1123 10:11:02.146013  516347 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:11:02.149054  516347 cli_runner.go:164] Run: docker network inspect embed-certs-566990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:11:02.177744  516347 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:11:02.182049  516347 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
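The bash one-liner above makes host.minikube.internal resolve to the network gateway (192.168.76.1) by stripping any old record from /etc/hosts and appending a fresh one. The same idempotent rewrite expressed in Go, with the address taken from the log:

-- example: injecting a host record into /etc/hosts (Go sketch) --
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const record = "192.168.76.1\thost.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any stale host.minikube.internal entry, keep everything else.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + record + "\n"
	if err := os.WriteFile(hostsPath, []byte(out), 0o644); err != nil {
		log.Fatal(err)
	}
	fmt.Println("updated", hostsPath)
}
-- /example --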
	I1123 10:11:02.194552  516347 kubeadm.go:884] updating cluster {Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:11:02.194679  516347 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:11:02.194732  516347 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:11:02.251925  516347 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:11:02.251944  516347 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:11:02.252002  516347 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:11:02.281436  516347 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:11:02.281459  516347 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:11:02.281467  516347 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 10:11:02.281562  516347 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-566990 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
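The kubelet drop-in above is rendered from the node's settings: hostname-override, node-ip and the path of the downloaded kubelet binary. A small sketch of producing that ExecStart line with text/template, using the values shown in the log (minikube then scp's the rendered unit to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, as seen a few lines below):

-- example: rendering the kubelet drop-in (Go sketch) --
package main

import (
	"log"
	"os"
	"text/template"
)

type kubeletFlags struct {
	KubeletPath      string
	HostnameOverride string
	NodeIP           string
}

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.HostnameOverride}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(unit))
	flags := kubeletFlags{
		KubeletPath:      "/var/lib/minikube/binaries/v1.34.1/kubelet",
		HostnameOverride: "embed-certs-566990",
		NodeIP:           "192.168.76.2",
	}
	// Writing to stdout here; minikube copies the rendered unit onto the node.
	if err := tmpl.Execute(os.Stdout, flags); err != nil {
		log.Fatal(err)
	}
}
-- /example --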
	I1123 10:11:02.281650  516347 ssh_runner.go:195] Run: crio config
	I1123 10:11:02.356539  516347 cni.go:84] Creating CNI manager for ""
	I1123 10:11:02.356562  516347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:11:02.356585  516347 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:11:02.356608  516347 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-566990 NodeName:embed-certs-566990 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:11:02.356748  516347 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-566990"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:11:02.356820  516347 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:11:02.365758  516347 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:11:02.365840  516347 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:11:02.374550  516347 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 10:11:02.388345  516347 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:11:02.402672  516347 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
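The kubeadm config just copied to /var/tmp/minikube/kubeadm.yaml.new is the multi-document YAML dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick stdlib-only sketch that splits such a file on document separators and reports each document's kind, handy when checking what actually reached the node (the default path is an assumption; pass your own copy as the first argument):

-- example: inspecting the multi-document kubeadm.yaml (Go sketch) --
package main

import (
	"fmt"
	"log"
	"os"
	"regexp"
	"strings"
)

func main() {
	// Path where the log above places the config; override via argv[1].
	path := "/var/tmp/minikube/kubeadm.yaml"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "unknown"
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			kind = m[1]
		}
		fmt.Printf("document %d: %s (%d bytes)\n", i+1, kind, len(doc))
	}
}
-- /example --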
	I1123 10:11:02.417283  516347 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:11:02.421094  516347 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:11:02.431716  516347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:11:02.588268  516347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:11:02.607044  516347 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990 for IP: 192.168.76.2
	I1123 10:11:02.607067  516347 certs.go:195] generating shared ca certs ...
	I1123 10:11:02.607083  516347 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:02.607222  516347 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:11:02.607273  516347 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:11:02.607282  516347 certs.go:257] generating profile certs ...
	I1123 10:11:02.607342  516347 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/client.key
	I1123 10:11:02.607359  516347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/client.crt with IP's: []
	I1123 10:11:03.186151  516347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/client.crt ...
	I1123 10:11:03.186230  516347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/client.crt: {Name:mk310c5f03a9a0317bf7e4490391f5f9334d4c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:03.186471  516347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/client.key ...
	I1123 10:11:03.186506  516347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/client.key: {Name:mkafc12f332c48c6902b0e78ec546ce7c7aab6fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:03.186661  516347 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key.e8338b8a
	I1123 10:11:03.186701  516347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.crt.e8338b8a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 10:11:03.236918  516347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.crt.e8338b8a ...
	I1123 10:11:03.236984  516347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.crt.e8338b8a: {Name:mkd733a72b4ba50b720215823b349a40bab4c1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:03.237202  516347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key.e8338b8a ...
	I1123 10:11:03.237238  516347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key.e8338b8a: {Name:mkd79d1a0674af1e548a4eca5efb393ca1ee4981 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:03.237456  516347 certs.go:382] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.crt.e8338b8a -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.crt
	I1123 10:11:03.237592  516347 certs.go:386] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key.e8338b8a -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key
	I1123 10:11:03.237680  516347 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.key
	I1123 10:11:03.237729  516347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.crt with IP's: []
	I1123 10:11:03.572199  516347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.crt ...
	I1123 10:11:03.572271  516347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.crt: {Name:mk9c74dec48e7a852b7547fb65a91236b1e1122b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:03.572485  516347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.key ...
	I1123 10:11:03.572522  516347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.key: {Name:mk939ced4d30b6a615e349eb4c52a44b92624537 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
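certs.go/crypto.go above generate the per-profile certificates, including an apiserver certificate whose IP SANs are [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]. For reference, a self-contained standard-library example that issues a certificate with exactly those IP SANs; it is self-signed here, whereas minikube signs these certs with its shared minikubeCA:

-- example: issuing a cert with IP SANs (Go sketch) --
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs taken from the apiserver cert in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.76.2"),
		},
	}
	// Self-signed: the template is also the parent. minikube instead signs
	// with the shared minikubeCA key pair.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
-- /example --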
	I1123 10:11:03.572792  516347 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:11:03.573289  516347 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:11:03.573337  516347 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:11:03.573445  516347 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:11:03.573502  516347 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:11:03.573560  516347 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:11:03.573637  516347 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:11:03.574225  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:11:03.596182  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:11:03.636585  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:11:03.668332  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:11:03.707801  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 10:11:03.726710  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:11:03.746178  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:11:03.767027  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:11:03.787345  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:11:03.808072  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:11:03.827724  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:11:03.848594  516347 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:11:03.862747  516347 ssh_runner.go:195] Run: openssl version
	I1123 10:11:03.870204  516347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:11:03.879386  516347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:11:03.883538  516347 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:11:03.883685  516347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:11:03.937809  516347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:11:03.949103  516347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:11:03.958680  516347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:11:03.962778  516347 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:11:03.962890  516347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:11:04.008107  516347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 10:11:04.017974  516347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:11:04.027359  516347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:11:04.031901  516347 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:11:04.032020  516347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:11:04.080975  516347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
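The ls/openssl/ln steps above wire each PEM under /usr/share/ca-certificates into the system trust store by creating an /etc/ssl/certs/<subject-hash>.0 symlink. A sketch of the same wiring that shells out to the openssl binary for the hash, assuming the three PEM paths from the log (requires root to write under /etc/ssl/certs):

-- example: hashing and linking CA certificates (Go sketch) --
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes OpenSSL's subject hash for pemPath and creates the
// /etc/ssl/certs/<hash>.0 symlink that the log above sets up with `ln -fs`.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate the -f in ln -fs
	if err := os.Symlink(pemPath, link); err != nil {
		return err
	}
	fmt.Println(pemPath, "->", link)
	return nil
}

func main() {
	// Paths taken from the log; adjust for your own certificates.
	for _, p := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/284904.pem",
		"/usr/share/ca-certificates/2849042.pem",
	} {
		if err := linkCert(p); err != nil {
			log.Fatal(err)
		}
	}
}
-- /example --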
	I1123 10:11:04.090398  516347 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:11:04.095596  516347 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:11:04.095652  516347 kubeadm.go:401] StartCluster: {Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:11:04.095730  516347 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:11:04.095787  516347 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:11:04.133496  516347 cri.go:89] found id: ""
	I1123 10:11:04.133565  516347 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:11:04.144867  516347 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:11:04.155322  516347 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:11:04.155400  516347 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:11:04.167167  516347 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:11:04.167189  516347 kubeadm.go:158] found existing configuration files:
	
	I1123 10:11:04.167252  516347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:11:04.178621  516347 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:11:04.178724  516347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:11:04.187132  516347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:11:04.196972  516347 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:11:04.197052  516347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:11:04.205516  516347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:11:04.215287  516347 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:11:04.215361  516347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:11:04.224889  516347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:11:04.234735  516347 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:11:04.234801  516347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
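The grep/rm loop above is kubeadm.go's stale-config cleanup: any existing admin.conf, kubelet.conf, controller-manager.conf or scheduler.conf that does not already point at https://control-plane.minikube.internal:8443 is removed before kubeadm init runs; on this first start none of the files exist. The same check written out directly, assuming those four paths:

-- example: stale kubeconfig cleanup (Go sketch) --
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Missing file is the normal first-start case seen in the log.
			fmt.Printf("%s: not present, nothing to clean\n", f)
			continue
		}
		if strings.Contains(string(data), endpoint) {
			fmt.Printf("%s: already points at %s, keeping\n", f, endpoint)
			continue
		}
		if err := os.Remove(f); err != nil {
			fmt.Printf("%s: remove failed: %v\n", f, err)
			continue
		}
		fmt.Printf("%s: stale, removed\n", f)
	}
}
-- /example --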
	I1123 10:11:04.243401  516347 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:11:04.306192  516347 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:11:04.306254  516347 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:11:04.344065  516347 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:11:04.344148  516347 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 10:11:04.344189  516347 kubeadm.go:319] OS: Linux
	I1123 10:11:04.344239  516347 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:11:04.344292  516347 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 10:11:04.344351  516347 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:11:04.344450  516347 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:11:04.344503  516347 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:11:04.344568  516347 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:11:04.344620  516347 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:11:04.344672  516347 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:11:04.344724  516347 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 10:11:04.490021  516347 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:11:04.490145  516347 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:11:04.490238  516347 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:11:04.499733  516347 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1123 10:11:02.059499  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:04.066773  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	I1123 10:11:04.507123  516347 out.go:252]   - Generating certificates and keys ...
	I1123 10:11:04.507229  516347 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:11:04.507296  516347 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:11:04.613985  516347 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:11:05.289379  516347 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:11:05.609642  516347 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:11:06.485212  516347 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:11:06.648126  516347 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:11:06.648687  516347 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-566990 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1123 10:11:06.558799  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:08.560116  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	I1123 10:11:07.227298  516347 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:11:07.227940  516347 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-566990 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 10:11:07.813109  516347 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:11:08.144972  516347 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:11:09.069955  516347 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:11:09.070500  516347 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:11:09.140499  516347 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:11:09.511527  516347 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:11:10.993228  516347 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:11:11.288662  516347 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:11:11.418747  516347 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:11:11.419844  516347 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:11:11.433896  516347 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:11:11.440316  516347 out.go:252]   - Booting up control plane ...
	I1123 10:11:11.440424  516347 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:11:11.440501  516347 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:11:11.440563  516347 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:11:11.473174  516347 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:11:11.473288  516347 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:11:11.488195  516347 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:11:11.488649  516347 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:11:11.488851  516347 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:11:11.649540  516347 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:11:11.649663  516347 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1123 10:11:10.561120  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:13.057399  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	I1123 10:11:13.150236  516347 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501154668s
	I1123 10:11:13.154218  516347 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:11:13.154317  516347 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 10:11:13.154414  516347 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:11:13.154491  516347 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:11:15.200605  516347 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.045337607s
	W1123 10:11:15.059261  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:17.558309  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:19.558990  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	I1123 10:11:17.724883  516347 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.570624719s
	I1123 10:11:19.655954  516347 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501628135s
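The control-plane-check lines above poll three endpoints until they answer: the apiserver's /livez on 192.168.76.2:8443 and the controller-manager and scheduler health ports on localhost. A minimal poller for the same URLs (TLS verification is skipped because these endpoints serve local, self-signed certificates; URLs and the 4m budget come from the log):

-- example: waiting for control-plane health endpoints (Go sketch) --
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url once a second until it returns 200 or the timeout expires.
func waitHealthy(client *http.Client, url string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return true
			}
		}
		time.Sleep(time.Second)
	}
	return false
}

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The local health ports serve self-signed certs, so skip verification,
		// the same way a `curl -k` against them would.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	urls := []string{
		"https://192.168.76.2:8443/livez",
		"https://127.0.0.1:10257/healthz",
		"https://127.0.0.1:10259/livez",
	}
	for _, u := range urls {
		fmt.Printf("%s healthy: %v\n", u, waitHealthy(client, u, 4*time.Minute))
	}
}
-- /example --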
	I1123 10:11:19.681165  516347 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:11:19.714535  516347 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:11:19.750668  516347 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:11:19.750872  516347 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-566990 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:11:19.799973  516347 kubeadm.go:319] [bootstrap-token] Using token: zpd6zu.4cg9pp8coqg7svyt
	I1123 10:11:19.805666  516347 out.go:252]   - Configuring RBAC rules ...
	I1123 10:11:19.805795  516347 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:11:19.813014  516347 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:11:19.836820  516347 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:11:19.842790  516347 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:11:19.848530  516347 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:11:19.861743  516347 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:11:20.063478  516347 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:11:20.506946  516347 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:11:21.062962  516347 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:11:21.064275  516347 kubeadm.go:319] 
	I1123 10:11:21.064343  516347 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:11:21.064350  516347 kubeadm.go:319] 
	I1123 10:11:21.064422  516347 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:11:21.064426  516347 kubeadm.go:319] 
	I1123 10:11:21.064449  516347 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:11:21.064505  516347 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:11:21.064552  516347 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:11:21.064557  516347 kubeadm.go:319] 
	I1123 10:11:21.064607  516347 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:11:21.064611  516347 kubeadm.go:319] 
	I1123 10:11:21.064655  516347 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:11:21.064659  516347 kubeadm.go:319] 
	I1123 10:11:21.064707  516347 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:11:21.064778  516347 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:11:21.064843  516347 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:11:21.064847  516347 kubeadm.go:319] 
	I1123 10:11:21.064926  516347 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:11:21.065002  516347 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:11:21.065037  516347 kubeadm.go:319] 
	I1123 10:11:21.065116  516347 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zpd6zu.4cg9pp8coqg7svyt \
	I1123 10:11:21.065212  516347 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:887f8119ffe4d5a917d34cb24e0eb6ba3996e6bcce8cd575315ae96526a54a7e \
	I1123 10:11:21.065231  516347 kubeadm.go:319] 	--control-plane 
	I1123 10:11:21.065235  516347 kubeadm.go:319] 
	I1123 10:11:21.065315  516347 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:11:21.065319  516347 kubeadm.go:319] 
	I1123 10:11:21.065396  516347 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zpd6zu.4cg9pp8coqg7svyt \
	I1123 10:11:21.065539  516347 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:887f8119ffe4d5a917d34cb24e0eb6ba3996e6bcce8cd575315ae96526a54a7e 
	I1123 10:11:21.069973  516347 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 10:11:21.070209  516347 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 10:11:21.070315  516347 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:11:21.070398  516347 cni.go:84] Creating CNI manager for ""
	I1123 10:11:21.070422  516347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:11:21.075548  516347 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 10:11:21.078712  516347 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:11:21.082769  516347 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:11:21.082792  516347 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:11:21.103006  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:11:21.437803  516347 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:11:21.437868  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:21.437931  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-566990 minikube.k8s.io/updated_at=2025_11_23T10_11_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=embed-certs-566990 minikube.k8s.io/primary=true
	I1123 10:11:21.604086  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:21.604164  516347 ops.go:34] apiserver oom_adj: -16
	W1123 10:11:22.058188  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:24.557706  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	I1123 10:11:22.104394  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:22.604220  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:23.104256  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:23.604257  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:24.104111  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:24.604193  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:25.104531  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:25.604155  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:25.743011  516347 kubeadm.go:1114] duration metric: took 4.305209556s to wait for elevateKubeSystemPrivileges
	I1123 10:11:25.743040  516347 kubeadm.go:403] duration metric: took 21.647391653s to StartCluster
	I1123 10:11:25.743058  516347 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:25.743120  516347 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:11:25.744547  516347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:25.744765  516347 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:11:25.744844  516347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:11:25.745070  516347 config.go:182] Loaded profile config "embed-certs-566990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:11:25.745101  516347 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:11:25.745156  516347 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-566990"
	I1123 10:11:25.745169  516347 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-566990"
	I1123 10:11:25.745189  516347 host.go:66] Checking if "embed-certs-566990" exists ...
	I1123 10:11:25.745930  516347 addons.go:70] Setting default-storageclass=true in profile "embed-certs-566990"
	I1123 10:11:25.745956  516347 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-566990"
	I1123 10:11:25.746025  516347 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:11:25.746250  516347 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:11:25.749706  516347 out.go:179] * Verifying Kubernetes components...
	I1123 10:11:25.757891  516347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:11:25.782262  516347 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:11:25.789653  516347 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:11:25.789676  516347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:11:25.789740  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:11:25.791439  516347 addons.go:239] Setting addon default-storageclass=true in "embed-certs-566990"
	I1123 10:11:25.791487  516347 host.go:66] Checking if "embed-certs-566990" exists ...
	I1123 10:11:25.791921  516347 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:11:25.828953  516347 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:11:25.828973  516347 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:11:25.829043  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:11:25.843158  516347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:11:25.867977  516347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:11:26.121551  516347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:11:26.171044  516347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:11:26.171201  516347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:11:26.204251  516347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:11:26.911303  516347 node_ready.go:35] waiting up to 6m0s for node "embed-certs-566990" to be "Ready" ...
	I1123 10:11:26.911691  516347 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 10:11:26.954570  516347 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1123 10:11:26.558809  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:29.057702  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	I1123 10:11:26.958699  516347 addons.go:530] duration metric: took 1.21358845s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:11:27.416197  516347 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-566990" context rescaled to 1 replicas
	W1123 10:11:28.914292  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:30.914682  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:31.558229  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:34.057435  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:33.414717  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:35.914248  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:36.057663  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	I1123 10:11:38.557764  514436 pod_ready.go:94] pod "coredns-66bc5c9577-v59bz" is "Ready"
	I1123 10:11:38.557796  514436 pod_ready.go:86] duration metric: took 41.005814081s for pod "coredns-66bc5c9577-v59bz" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:38.560510  514436 pod_ready.go:83] waiting for pod "etcd-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:38.565230  514436 pod_ready.go:94] pod "etcd-no-preload-020224" is "Ready"
	I1123 10:11:38.565260  514436 pod_ready.go:86] duration metric: took 4.728357ms for pod "etcd-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:38.568295  514436 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:38.573206  514436 pod_ready.go:94] pod "kube-apiserver-no-preload-020224" is "Ready"
	I1123 10:11:38.573235  514436 pod_ready.go:86] duration metric: took 4.912361ms for pod "kube-apiserver-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:38.575569  514436 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:38.756514  514436 pod_ready.go:94] pod "kube-controller-manager-no-preload-020224" is "Ready"
	I1123 10:11:38.756542  514436 pod_ready.go:86] duration metric: took 180.943339ms for pod "kube-controller-manager-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:38.956485  514436 pod_ready.go:83] waiting for pod "kube-proxy-7s6pf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:39.356167  514436 pod_ready.go:94] pod "kube-proxy-7s6pf" is "Ready"
	I1123 10:11:39.356192  514436 pod_ready.go:86] duration metric: took 399.68094ms for pod "kube-proxy-7s6pf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:39.555948  514436 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:39.956235  514436 pod_ready.go:94] pod "kube-scheduler-no-preload-020224" is "Ready"
	I1123 10:11:39.956306  514436 pod_ready.go:86] duration metric: took 400.328671ms for pod "kube-scheduler-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:39.956339  514436 pod_ready.go:40] duration metric: took 42.407757118s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:11:40.023535  514436 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:11:40.026595  514436 out.go:179] * Done! kubectl is now configured to use "no-preload-020224" cluster and "default" namespace by default
	W1123 10:11:38.414203  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:40.420494  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:42.914931  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:45.414536  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:47.414639  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:49.915221  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
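	
	[Editor's illustrative sketch, not part of the captured log] The node_ready lines above come from minikube polling the node's Ready condition until it becomes True or the 6m0s timeout expires. The snippet below is a minimal, assumed equivalent of that wait written against client-go; the kubeconfig path and node name are taken from this log purely for illustration, and this is not minikube's own implementation.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// Assumed kubeconfig path and node name, copied from the log for the example only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		deadline := time.Now().Add(6 * time.Minute) // mirrors the "waiting up to 6m0s" line above
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "embed-certs-566990", metav1.GetOptions{})
			if err == nil && nodeReady(node) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(2 * time.Second) // retry, as the node_ready.go warnings above show
		}
		fmt.Println("timed out waiting for node to become Ready")
	}
	
	The same polling pattern, with a label selector instead of a node name, corresponds to the pod_ready.go waits for the kube-system pods earlier in this log.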
	
	
	==> CRI-O <==
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.910360542Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ecd6e3ae-be8c-466e-9150-f860a4354e09 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.919698388Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3d9dc970-58dc-4754-b65b-b470a89a9402 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.922954369Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr/dashboard-metrics-scraper" id=4460bc5a-ce9d-48af-a5c1-38cd194f6e3d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.923085973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.930327308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.932924283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.952120706Z" level=info msg="Created container 35353a7fdcee3b1c11f15830f6a44f93a5172d7e420261be0ec4df87bb885de5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr/dashboard-metrics-scraper" id=4460bc5a-ce9d-48af-a5c1-38cd194f6e3d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.953273751Z" level=info msg="Starting container: 35353a7fdcee3b1c11f15830f6a44f93a5172d7e420261be0ec4df87bb885de5" id=c607c7f6-907d-4c39-af0e-bd85aae39eb3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.957794203Z" level=info msg="Started container" PID=1638 containerID=35353a7fdcee3b1c11f15830f6a44f93a5172d7e420261be0ec4df87bb885de5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr/dashboard-metrics-scraper id=c607c7f6-907d-4c39-af0e-bd85aae39eb3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7fa17977110b5b37793311c20bd86ce823f50ad184358b8152f76df229afe738
	Nov 23 10:11:34 no-preload-020224 conmon[1636]: conmon 35353a7fdcee3b1c11f1 <ninfo>: container 1638 exited with status 1
	Nov 23 10:11:35 no-preload-020224 crio[655]: time="2025-11-23T10:11:35.26703613Z" level=info msg="Removing container: 413620f72045641113079d4a31f67a7e6fee16a80073eb3040877cd6c11292ae" id=66925551-8142-4507-8cda-a7747be5e643 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:11:35 no-preload-020224 crio[655]: time="2025-11-23T10:11:35.278861664Z" level=info msg="Error loading conmon cgroup of container 413620f72045641113079d4a31f67a7e6fee16a80073eb3040877cd6c11292ae: cgroup deleted" id=66925551-8142-4507-8cda-a7747be5e643 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:11:35 no-preload-020224 crio[655]: time="2025-11-23T10:11:35.289659736Z" level=info msg="Removed container 413620f72045641113079d4a31f67a7e6fee16a80073eb3040877cd6c11292ae: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr/dashboard-metrics-scraper" id=66925551-8142-4507-8cda-a7747be5e643 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.163825138Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.171060418Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.171096217Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.171121374Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.174358409Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.17455298Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.174665557Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.177616638Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.177647875Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.177671596Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.181332359Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.18136614Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	35353a7fdcee3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   7fa17977110b5       dashboard-metrics-scraper-6ffb444bf9-j5kdr   kubernetes-dashboard
	7f683b37fb2e2       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           28 seconds ago       Running             storage-provisioner         2                   d682d5b88afbc       storage-provisioner                          kube-system
	a1167a40e0a49       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   48 seconds ago       Running             kubernetes-dashboard        0                   27d51be8520fe       kubernetes-dashboard-855c9754f9-n54fr        kubernetes-dashboard
	b70888786109f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           59 seconds ago       Running             coredns                     1                   166e37624eae1       coredns-66bc5c9577-v59bz                     kube-system
	61ca8c529c76f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   2ea1e4656660b       busybox                                      default
	68f6227f68631       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           59 seconds ago       Running             kube-proxy                  1                   504da6166ae02       kube-proxy-7s6pf                             kube-system
	e17889e3bbe35       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           59 seconds ago       Exited              storage-provisioner         1                   d682d5b88afbc       storage-provisioner                          kube-system
	bafae4c509b34       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   a8364799eb611       kindnet-ghq9t                                kube-system
	fde673b61a03b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   f7226d8aa1d36       kube-apiserver-no-preload-020224             kube-system
	e20d0c00b09f6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   6287c82f233a1       kube-controller-manager-no-preload-020224    kube-system
	cc22d1a213207       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   49be46b129576       kube-scheduler-no-preload-020224             kube-system
	ec9f0e1b62e29       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   9f6cca0e95669       etcd-no-preload-020224                       kube-system
	
	
	==> coredns [b70888786109ff5bcd4b3c55c8ff29deccf75501effb0a21482fde850addde12] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57967 - 48206 "HINFO IN 7540399292455442044.878110780711058301. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.023046494s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-020224
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-020224
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=no-preload-020224
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_09_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:09:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-020224
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:11:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:11:26 +0000   Sun, 23 Nov 2025 10:09:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:11:26 +0000   Sun, 23 Nov 2025 10:09:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:11:26 +0000   Sun, 23 Nov 2025 10:09:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:11:26 +0000   Sun, 23 Nov 2025 10:10:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-020224
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                57e370ae-7663-48e3-a7c6-52885f59b718
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-66bc5c9577-v59bz                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m
	  kube-system                 etcd-no-preload-020224                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m6s
	  kube-system                 kindnet-ghq9t                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m
	  kube-system                 kube-apiserver-no-preload-020224              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-no-preload-020224     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-7s6pf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-scheduler-no-preload-020224              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-j5kdr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-n54fr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 118s                   kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node no-preload-020224 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node no-preload-020224 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m16s (x8 over 2m16s)  kubelet          Node no-preload-020224 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m6s                   kubelet          Node no-preload-020224 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m6s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m6s                   kubelet          Node no-preload-020224 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m6s                   kubelet          Node no-preload-020224 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m1s                   node-controller  Node no-preload-020224 event: Registered Node no-preload-020224 in Controller
	  Normal   NodeReady                104s                   kubelet          Node no-preload-020224 status is now: NodeReady
	  Normal   Starting                 68s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 68s)      kubelet          Node no-preload-020224 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 68s)      kubelet          Node no-preload-020224 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 68s)      kubelet          Node no-preload-020224 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node no-preload-020224 event: Registered Node no-preload-020224 in Controller
	
	
	==> dmesg <==
	[Nov23 09:47] overlayfs: idmapped layers are currently not supported
	[ +12.563591] hrtimer: interrupt took 4093727 ns
	[ +14.190024] overlayfs: idmapped layers are currently not supported
	[Nov23 09:49] overlayfs: idmapped layers are currently not supported
	[Nov23 09:50] overlayfs: idmapped layers are currently not supported
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	[Nov23 10:10] overlayfs: idmapped layers are currently not supported
	[Nov23 10:11] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ec9f0e1b62e29a096907f8e55276c570c3b3ba64c77efeee2ee959d0dec0f641] <==
	{"level":"warn","ts":"2025-11-23T10:10:53.573607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.590293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.603920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.620147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.634902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.649731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.664870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.680367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.723653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.732927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.757037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.774625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.794426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.821461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.842307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.858682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.877657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.900432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.942242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.956737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.983955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.990142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:54.024886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:54.071408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:54.120476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53538","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:11:56 up  2:54,  0 user,  load average: 3.86, 4.40, 3.45
	Linux no-preload-020224 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bafae4c509b34428ee8a90309affc818c464f230a656437d45944bad64ebec14] <==
	I1123 10:10:56.948131       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:10:56.957788       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 10:10:56.957942       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:10:56.957955       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:10:56.957970       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:10:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:10:57.163006       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:10:57.163043       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:10:57.163052       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:10:57.163682       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 10:11:27.163096       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 10:11:27.163361       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 10:11:27.163470       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 10:11:27.164086       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 10:11:28.763423       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:11:28.763460       1 metrics.go:72] Registering metrics
	I1123 10:11:28.763530       1 controller.go:711] "Syncing nftables rules"
	I1123 10:11:37.163517       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:11:37.163567       1 main.go:301] handling current node
	I1123 10:11:47.162625       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:11:47.162660       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fde673b61a03b720e2492ddf051014b494251142f14b9bcf92cb9b5416dc9304] <==
	I1123 10:10:55.657998       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 10:10:55.661587       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:10:55.671100       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 10:10:55.671173       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 10:10:55.671189       1 policy_source.go:240] refreshing policies
	I1123 10:10:55.671298       1 aggregator.go:171] initial CRD sync complete...
	I1123 10:10:55.671308       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 10:10:55.671314       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:10:55.671320       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:10:55.706174       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:10:55.712704       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 10:10:55.712771       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 10:10:55.712837       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1123 10:10:55.770618       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 10:10:55.981967       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:10:56.417682       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:10:57.123124       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:10:57.228667       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:10:57.267635       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:10:57.283734       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:10:57.362329       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.105.87"}
	I1123 10:10:57.381807       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.19.184"}
	I1123 10:10:59.467817       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:10:59.521093       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:10:59.811725       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e20d0c00b09f6363ed0697d9006ccbda9d1b29842f0e683983474b226e898361] <==
	I1123 10:10:59.218262       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:10:59.218596       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-020224"
	I1123 10:10:59.218729       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 10:10:59.236291       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:10:59.240418       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:10:59.260317       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:10:59.260595       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:10:59.260670       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:10:59.260727       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 10:10:59.260785       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:10:59.260861       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 10:10:59.263230       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 10:10:59.263349       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:10:59.264057       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:10:59.264318       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:10:59.264332       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 10:10:59.264344       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:10:59.271171       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:10:59.274568       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 10:10:59.281810       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:10:59.282527       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:10:59.284215       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:10:59.295016       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:10:59.842632       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1123 10:10:59.842824       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [68f6227f68631f93834013a157602cddcb5a711bae38e8f85120cd85c0718b34] <==
	I1123 10:10:57.180889       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:10:57.311266       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:10:57.413956       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:10:57.414008       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 10:10:57.414094       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:10:57.443332       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:10:57.443451       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:10:57.447699       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:10:57.448092       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:10:57.448311       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:10:57.449876       1 config.go:200] "Starting service config controller"
	I1123 10:10:57.449944       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:10:57.449987       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:10:57.450021       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:10:57.450068       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:10:57.450095       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:10:57.450716       1 config.go:309] "Starting node config controller"
	I1123 10:10:57.453315       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:10:57.453391       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:10:57.551138       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:10:57.551176       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:10:57.551220       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cc22d1a213207a0fdc2938062c1f1f5505506d20a750b9890fa2b63926bbbfa7] <==
	I1123 10:10:50.851094       1 serving.go:386] Generated self-signed cert in-memory
	W1123 10:10:55.509904       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 10:10:55.510012       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:10:55.510046       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 10:10:55.510087       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 10:10:55.646314       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:10:55.646345       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:10:55.661828       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:10:55.661933       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:10:55.661979       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:10:55.661995       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:10:55.762561       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:10:56 no-preload-020224 kubelet[781]: W1123 10:10:56.442847     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/crio-504da6166ae026de6e6cdacb49e0c316669e161270dbbfb4f0debc8077dce31b WatchSource:0}: Error finding container 504da6166ae026de6e6cdacb49e0c316669e161270dbbfb4f0debc8077dce31b: Status 404 returned error can't find the container with id 504da6166ae026de6e6cdacb49e0c316669e161270dbbfb4f0debc8077dce31b
	Nov 23 10:10:56 no-preload-020224 kubelet[781]: W1123 10:10:56.443406     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/crio-2ea1e4656660b9e009a8ab68c677aa129f9147c619dbc4c47c4d7b691d4d6a6e WatchSource:0}: Error finding container 2ea1e4656660b9e009a8ab68c677aa129f9147c619dbc4c47c4d7b691d4d6a6e: Status 404 returned error can't find the container with id 2ea1e4656660b9e009a8ab68c677aa129f9147c619dbc4c47c4d7b691d4d6a6e
	Nov 23 10:10:59 no-preload-020224 kubelet[781]: I1123 10:10:59.933809     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w9zm\" (UniqueName: \"kubernetes.io/projected/c3920bf6-1c4d-4052-b857-79560bb6954b-kube-api-access-8w9zm\") pod \"kubernetes-dashboard-855c9754f9-n54fr\" (UID: \"c3920bf6-1c4d-4052-b857-79560bb6954b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n54fr"
	Nov 23 10:10:59 no-preload-020224 kubelet[781]: I1123 10:10:59.934309     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a90ef1aa-01a6-46d1-bbcc-c09d2e529547-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-j5kdr\" (UID: \"a90ef1aa-01a6-46d1-bbcc-c09d2e529547\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr"
	Nov 23 10:10:59 no-preload-020224 kubelet[781]: I1123 10:10:59.934404     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c3920bf6-1c4d-4052-b857-79560bb6954b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-n54fr\" (UID: \"c3920bf6-1c4d-4052-b857-79560bb6954b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n54fr"
	Nov 23 10:10:59 no-preload-020224 kubelet[781]: I1123 10:10:59.934492     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8229w\" (UniqueName: \"kubernetes.io/projected/a90ef1aa-01a6-46d1-bbcc-c09d2e529547-kube-api-access-8229w\") pod \"dashboard-metrics-scraper-6ffb444bf9-j5kdr\" (UID: \"a90ef1aa-01a6-46d1-bbcc-c09d2e529547\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr"
	Nov 23 10:11:14 no-preload-020224 kubelet[781]: I1123 10:11:14.204763     781 scope.go:117] "RemoveContainer" containerID="05d633285dfade4a0cc3bdec255cf2a35aa20f7d6bced0dabc12c550722f49cc"
	Nov 23 10:11:14 no-preload-020224 kubelet[781]: I1123 10:11:14.230295     781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n54fr" podStartSLOduration=8.524733359 podStartE2EDuration="15.230244893s" podCreationTimestamp="2025-11-23 10:10:59 +0000 UTC" firstStartedPulling="2025-11-23 10:11:00.29981658 +0000 UTC m=+12.606916894" lastFinishedPulling="2025-11-23 10:11:07.005328122 +0000 UTC m=+19.312428428" observedRunningTime="2025-11-23 10:11:07.19900725 +0000 UTC m=+19.506107556" watchObservedRunningTime="2025-11-23 10:11:14.230244893 +0000 UTC m=+26.537345198"
	Nov 23 10:11:15 no-preload-020224 kubelet[781]: I1123 10:11:15.208980     781 scope.go:117] "RemoveContainer" containerID="05d633285dfade4a0cc3bdec255cf2a35aa20f7d6bced0dabc12c550722f49cc"
	Nov 23 10:11:15 no-preload-020224 kubelet[781]: I1123 10:11:15.209311     781 scope.go:117] "RemoveContainer" containerID="413620f72045641113079d4a31f67a7e6fee16a80073eb3040877cd6c11292ae"
	Nov 23 10:11:15 no-preload-020224 kubelet[781]: E1123 10:11:15.210247     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j5kdr_kubernetes-dashboard(a90ef1aa-01a6-46d1-bbcc-c09d2e529547)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr" podUID="a90ef1aa-01a6-46d1-bbcc-c09d2e529547"
	Nov 23 10:11:16 no-preload-020224 kubelet[781]: I1123 10:11:16.212774     781 scope.go:117] "RemoveContainer" containerID="413620f72045641113079d4a31f67a7e6fee16a80073eb3040877cd6c11292ae"
	Nov 23 10:11:16 no-preload-020224 kubelet[781]: E1123 10:11:16.212973     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j5kdr_kubernetes-dashboard(a90ef1aa-01a6-46d1-bbcc-c09d2e529547)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr" podUID="a90ef1aa-01a6-46d1-bbcc-c09d2e529547"
	Nov 23 10:11:20 no-preload-020224 kubelet[781]: I1123 10:11:20.109516     781 scope.go:117] "RemoveContainer" containerID="413620f72045641113079d4a31f67a7e6fee16a80073eb3040877cd6c11292ae"
	Nov 23 10:11:20 no-preload-020224 kubelet[781]: E1123 10:11:20.110175     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j5kdr_kubernetes-dashboard(a90ef1aa-01a6-46d1-bbcc-c09d2e529547)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr" podUID="a90ef1aa-01a6-46d1-bbcc-c09d2e529547"
	Nov 23 10:11:27 no-preload-020224 kubelet[781]: I1123 10:11:27.242241     781 scope.go:117] "RemoveContainer" containerID="e17889e3bbe35de35cd8f26268b7a93a6ea26479b3e8840877416b118ac06f7c"
	Nov 23 10:11:34 no-preload-020224 kubelet[781]: I1123 10:11:34.909825     781 scope.go:117] "RemoveContainer" containerID="413620f72045641113079d4a31f67a7e6fee16a80073eb3040877cd6c11292ae"
	Nov 23 10:11:35 no-preload-020224 kubelet[781]: I1123 10:11:35.264723     781 scope.go:117] "RemoveContainer" containerID="413620f72045641113079d4a31f67a7e6fee16a80073eb3040877cd6c11292ae"
	Nov 23 10:11:35 no-preload-020224 kubelet[781]: I1123 10:11:35.265329     781 scope.go:117] "RemoveContainer" containerID="35353a7fdcee3b1c11f15830f6a44f93a5172d7e420261be0ec4df87bb885de5"
	Nov 23 10:11:35 no-preload-020224 kubelet[781]: E1123 10:11:35.265539     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j5kdr_kubernetes-dashboard(a90ef1aa-01a6-46d1-bbcc-c09d2e529547)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr" podUID="a90ef1aa-01a6-46d1-bbcc-c09d2e529547"
	Nov 23 10:11:40 no-preload-020224 kubelet[781]: I1123 10:11:40.112474     781 scope.go:117] "RemoveContainer" containerID="35353a7fdcee3b1c11f15830f6a44f93a5172d7e420261be0ec4df87bb885de5"
	Nov 23 10:11:40 no-preload-020224 kubelet[781]: E1123 10:11:40.112665     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j5kdr_kubernetes-dashboard(a90ef1aa-01a6-46d1-bbcc-c09d2e529547)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr" podUID="a90ef1aa-01a6-46d1-bbcc-c09d2e529547"
	Nov 23 10:11:53 no-preload-020224 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:11:53 no-preload-020224 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:11:53 no-preload-020224 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a1167a40e0a499dbfedc0b42aed6734a35d1bc25e255a90cfffe0a0e7023eb30] <==
	2025/11/23 10:11:07 Using namespace: kubernetes-dashboard
	2025/11/23 10:11:07 Using in-cluster config to connect to apiserver
	2025/11/23 10:11:07 Using secret token for csrf signing
	2025/11/23 10:11:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 10:11:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 10:11:07 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 10:11:07 Generating JWE encryption key
	2025/11/23 10:11:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 10:11:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 10:11:07 Initializing JWE encryption key from synchronized object
	2025/11/23 10:11:07 Creating in-cluster Sidecar client
	2025/11/23 10:11:07 Serving insecurely on HTTP port: 9090
	2025/11/23 10:11:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:11:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:11:07 Starting overwatch
	
	
	==> storage-provisioner [7f683b37fb2e222c6e33a53d3dd7bc514b2b5e218719c2446f59bd1db11e26f1] <==
	I1123 10:11:27.311025       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:11:27.311099       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:11:27.314030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:30.768779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:35.030726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:38.629286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:41.683355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:44.705538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:44.712683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:11:44.713058       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:11:44.713243       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-020224_fccf3123-3411-4345-ac68-e05ed271f5f5!
	I1123 10:11:44.714132       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"160d8384-48d9-41be-8c08-06b5acefeeea", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-020224_fccf3123-3411-4345-ac68-e05ed271f5f5 became leader
	W1123 10:11:44.719670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:44.725358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:11:44.814215       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-020224_fccf3123-3411-4345-ac68-e05ed271f5f5!
	W1123 10:11:46.728409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:46.735142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:48.739155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:48.743412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:50.747142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:50.754859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:52.758699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:52.765098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:54.769293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:54.776065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e17889e3bbe35de35cd8f26268b7a93a6ea26479b3e8840877416b118ac06f7c] <==
	I1123 10:10:57.034067       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 10:11:27.036210       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-020224 -n no-preload-020224
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-020224 -n no-preload-020224: exit status 2 (381.88525ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-020224 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-020224
helpers_test.go:243: (dbg) docker inspect no-preload-020224:

-- stdout --
	[
	    {
	        "Id": "18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0",
	        "Created": "2025-11-23T10:09:02.634228682Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 514606,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:10:39.952872433Z",
	            "FinishedAt": "2025-11-23T10:10:38.856432398Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/hostname",
	        "HostsPath": "/var/lib/docker/containers/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/hosts",
	        "LogPath": "/var/lib/docker/containers/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0-json.log",
	        "Name": "/no-preload-020224",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-020224:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-020224",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0",
	                "LowerDir": "/var/lib/docker/overlay2/fa5d3a25bcb7f58c03a8da4f93eb6974e9507a851f3a34e8ca39457b619a17bf-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa5d3a25bcb7f58c03a8da4f93eb6974e9507a851f3a34e8ca39457b619a17bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa5d3a25bcb7f58c03a8da4f93eb6974e9507a851f3a34e8ca39457b619a17bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa5d3a25bcb7f58c03a8da4f93eb6974e9507a851f3a34e8ca39457b619a17bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-020224",
	                "Source": "/var/lib/docker/volumes/no-preload-020224/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-020224",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-020224",
	                "name.minikube.sigs.k8s.io": "no-preload-020224",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f1797686c00aa2c07fa10dc986715c7e7be8bdf0445b6bc8ff9185c84e2a1d11",
	            "SandboxKey": "/var/run/docker/netns/f1797686c00a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-020224": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:e4:39:30:cb:c8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5bdf554cce75de475d0aa700ed33b59629266aa02ea95fbb3579c79c5e0148ad",
	                    "EndpointID": "7cb3aa10bc93c76e648665a5884b53e6d7303f50384cca6c6d37b3dcaac34f6b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-020224",
	                        "18d5b0a18428"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-020224 -n no-preload-020224
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-020224 -n no-preload-020224: exit status 2 (370.096319ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-020224 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-020224 logs -n 25: (1.395713729s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p calico-507563 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo containerd config dump                                                                                                                                                                                                  │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo crio config                                                                                                                                                                                                             │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ delete  │ -p calico-507563                                                                                                                                                                                                                              │ calico-507563          │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-706028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │                     │
	│ stop    │ -p old-k8s-version-706028 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-706028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p old-k8s-version-706028 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-020224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ stop    │ -p no-preload-020224 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ image   │ old-k8s-version-706028 image list --format=json                                                                                                                                                                                               │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ pause   │ -p old-k8s-version-706028 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-020224 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:11 UTC │
	│ delete  │ -p old-k8s-version-706028                                                                                                                                                                                                                     │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ delete  │ -p old-k8s-version-706028                                                                                                                                                                                                                     │ old-k8s-version-706028 │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-566990     │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ image   │ no-preload-020224 image list --format=json                                                                                                                                                                                                    │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │ 23 Nov 25 10:11 UTC │
	│ pause   │ -p no-preload-020224 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020224      │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:10:46
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:10:46.943623  516347 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:10:46.944095  516347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:46.944106  516347 out.go:374] Setting ErrFile to fd 2...
	I1123 10:10:46.944110  516347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:10:46.944378  516347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:10:46.944794  516347 out.go:368] Setting JSON to false
	I1123 10:10:46.945684  516347 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10396,"bootTime":1763882251,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:10:46.945763  516347 start.go:143] virtualization:  
	I1123 10:10:46.949103  516347 out.go:179] * [embed-certs-566990] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:10:46.952935  516347 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 10:10:46.953195  516347 notify.go:221] Checking for updates...
	I1123 10:10:46.958846  516347 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:10:46.961774  516347 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:10:46.964654  516347 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 10:10:46.967423  516347 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:10:46.970231  516347 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:10:46.973688  516347 config.go:182] Loaded profile config "no-preload-020224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:46.973843  516347 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:10:47.015038  516347 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:10:47.015216  516347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:10:47.080126  516347 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 10:10:47.070054735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:10:47.080230  516347 docker.go:319] overlay module found
	I1123 10:10:47.083788  516347 out.go:179] * Using the docker driver based on user configuration
	I1123 10:10:47.086759  516347 start.go:309] selected driver: docker
	I1123 10:10:47.086778  516347 start.go:927] validating driver "docker" against <nil>
	I1123 10:10:47.086792  516347 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:10:47.087462  516347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:10:47.171749  516347 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 10:10:47.156932254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:10:47.171905  516347 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:10:47.172118  516347 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:10:47.175026  516347 out.go:179] * Using Docker driver with root privileges
	I1123 10:10:47.177976  516347 cni.go:84] Creating CNI manager for ""
	I1123 10:10:47.178047  516347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:10:47.178055  516347 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:10:47.178135  516347 start.go:353] cluster config:
	{Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:10:47.181303  516347 out.go:179] * Starting "embed-certs-566990" primary control-plane node in "embed-certs-566990" cluster
	I1123 10:10:47.184256  516347 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:10:47.187222  516347 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:10:47.190038  516347 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:10:47.190091  516347 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 10:10:47.190100  516347 cache.go:65] Caching tarball of preloaded images
	I1123 10:10:47.190174  516347 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 10:10:47.190183  516347 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:10:47.190288  516347 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/config.json ...
	I1123 10:10:47.190306  516347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/config.json: {Name:mk9f0c217c2ecd7bc9f554d07a2532acdc5529fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:47.190456  516347 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:10:47.211788  516347 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:10:47.211808  516347 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:10:47.211823  516347 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:10:47.211852  516347 start.go:360] acquireMachinesLock for embed-certs-566990: {Name:mkc766faecda88b98c3d85f6aada2ef6121554c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:10:47.211956  516347 start.go:364] duration metric: took 88.797µs to acquireMachinesLock for "embed-certs-566990"
	I1123 10:10:47.211985  516347 start.go:93] Provisioning new machine with config: &{Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:10:47.212047  516347 start.go:125] createHost starting for "" (driver="docker")
	I1123 10:10:44.617080  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:10:44.635943  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:10:44.653535  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:10:44.671259  514436 provision.go:87] duration metric: took 785.346327ms to configureAuth
	I1123 10:10:44.671335  514436 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:10:44.671563  514436 config.go:182] Loaded profile config "no-preload-020224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:44.671717  514436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:10:44.688691  514436 main.go:143] libmachine: Using SSH client type: native
	I1123 10:10:44.689010  514436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33476 <nil> <nil>}
	I1123 10:10:44.689024  514436 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:10:45.125004  514436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:10:45.125029  514436 machine.go:97] duration metric: took 4.8207253s to provisionDockerMachine
	I1123 10:10:45.125044  514436 start.go:293] postStartSetup for "no-preload-020224" (driver="docker")
	I1123 10:10:45.125055  514436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:10:45.125136  514436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:10:45.125188  514436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:10:45.148895  514436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:10:45.303904  514436 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:10:45.309213  514436 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:10:45.309307  514436 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:10:45.309334  514436 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 10:10:45.309471  514436 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 10:10:45.309625  514436 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 10:10:45.309797  514436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:10:45.321532  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:10:45.352168  514436 start.go:296] duration metric: took 227.108629ms for postStartSetup
	I1123 10:10:45.352597  514436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:10:45.352949  514436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:10:45.376291  514436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:10:45.523855  514436 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:10:45.530265  514436 fix.go:56] duration metric: took 5.645271712s for fixHost
	I1123 10:10:45.530295  514436 start.go:83] releasing machines lock for "no-preload-020224", held for 5.645325193s
	I1123 10:10:45.530395  514436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-020224
	I1123 10:10:45.551737  514436 ssh_runner.go:195] Run: cat /version.json
	I1123 10:10:45.551788  514436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:10:45.552065  514436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:10:45.552133  514436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:10:45.583405  514436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:10:45.606973  514436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:10:45.705128  514436 ssh_runner.go:195] Run: systemctl --version
	I1123 10:10:45.812484  514436 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:10:45.852026  514436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:10:45.856782  514436 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:10:45.856903  514436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:10:45.865720  514436 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:10:45.865785  514436 start.go:496] detecting cgroup driver to use...
	I1123 10:10:45.865834  514436 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:10:45.865911  514436 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:10:45.882009  514436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:10:45.896593  514436 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:10:45.896699  514436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:10:45.912756  514436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:10:45.927649  514436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:10:46.074565  514436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:10:46.234048  514436 docker.go:234] disabling docker service ...
	I1123 10:10:46.234157  514436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:10:46.257722  514436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:10:46.291789  514436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:10:46.436230  514436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:10:46.591064  514436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:10:46.605618  514436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:10:46.623348  514436 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:10:46.623450  514436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:10:46.632716  514436 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:10:46.632807  514436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:10:46.642542  514436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:10:46.653124  514436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:10:46.663094  514436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:10:46.674854  514436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:10:46.683746  514436 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:10:46.692405  514436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:10:46.703419  514436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:10:46.711400  514436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:10:46.719058  514436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:10:46.864803  514436 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:10:47.086604  514436 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:10:47.086668  514436 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:10:47.091907  514436 start.go:564] Will wait 60s for crictl version
	I1123 10:10:47.091966  514436 ssh_runner.go:195] Run: which crictl
	I1123 10:10:47.096984  514436 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:10:47.143426  514436 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:10:47.143515  514436 ssh_runner.go:195] Run: crio --version
	I1123 10:10:47.183860  514436 ssh_runner.go:195] Run: crio --version
	I1123 10:10:47.235684  514436 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:10:47.238585  514436 cli_runner.go:164] Run: docker network inspect no-preload-020224 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:10:47.256453  514436 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 10:10:47.263095  514436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:10:47.275005  514436 kubeadm.go:884] updating cluster {Name:no-preload-020224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:10:47.275129  514436 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:10:47.275177  514436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:10:47.337659  514436 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:10:47.337679  514436 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:10:47.337687  514436 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1123 10:10:47.337781  514436 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-020224 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-020224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
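The kubelet block above is the systemd drop-in (the 367-byte 10-kubeadm.conf staged a few lines below): the bare ExecStart= first clears the packaged unit's command, then the second ExecStart re-declares it with node-specific flags. A sketch of rendering such a drop-in from node parameters with text/template; the struct and field names are illustrative, not minikube's, and the flag list is abridged:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletOpts is a hypothetical view of the values substituted into the drop-in.
    type kubeletOpts struct {
    	BinDir, NodeName, NodeIP string
    }

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(dropIn))
    	// Values taken from the log above.
    	_ = t.Execute(os.Stdout, kubeletOpts{
    		BinDir:   "/var/lib/minikube/binaries/v1.34.1",
    		NodeName: "no-preload-020224",
    		NodeIP:   "192.168.85.2",
    	})
    }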
	I1123 10:10:47.337861  514436 ssh_runner.go:195] Run: crio config
	I1123 10:10:47.432640  514436 cni.go:84] Creating CNI manager for ""
	I1123 10:10:47.432664  514436 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:10:47.432679  514436 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:10:47.432702  514436 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-020224 NodeName:no-preload-020224 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:10:47.432832  514436 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-020224"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
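The generated kubeadm config printed above is one stream of four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by --- lines; it is staged as kubeadm.yaml.new further down and only acted on if it differs from the file already on the node. A small Go sketch of splitting such a stream on the document separator; the helper name is ours:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // splitManifests breaks a multi-document YAML stream on "---" separator lines,
    // the same layout as the kubeadm.yaml staged above.
    func splitManifests(doc string) []string {
    	var docs []string
    	for _, part := range strings.Split(doc, "\n---\n") {
    		if strings.TrimSpace(part) != "" {
    			docs = append(docs, part)
    		}
    	}
    	return docs
    }

    func main() {
    	sample := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n"
    	for i, d := range splitManifests(sample) {
    		fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(d))
    	}
    }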
	
	I1123 10:10:47.432904  514436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:10:47.442488  514436 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:10:47.442573  514436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:10:47.455780  514436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:10:47.486191  514436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:10:47.508272  514436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1123 10:10:47.522369  514436 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:10:47.526158  514436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:10:47.537244  514436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:10:47.669646  514436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:10:47.693633  514436 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224 for IP: 192.168.85.2
	I1123 10:10:47.693651  514436 certs.go:195] generating shared ca certs ...
	I1123 10:10:47.693666  514436 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:47.693799  514436 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:10:47.693843  514436 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:10:47.693850  514436 certs.go:257] generating profile certs ...
	I1123 10:10:47.693928  514436 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/client.key
	I1123 10:10:47.693997  514436 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.key.d87566b3
	I1123 10:10:47.694034  514436 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.key
	I1123 10:10:47.694137  514436 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:10:47.694166  514436 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:10:47.694174  514436 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:10:47.694200  514436 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:10:47.694225  514436 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:10:47.694248  514436 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:10:47.694321  514436 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:10:47.694936  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:10:47.722692  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:10:47.756799  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:10:47.788242  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:10:47.818128  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:10:47.843448  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:10:47.888147  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:10:47.967060  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/no-preload-020224/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:10:47.989147  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:10:48.025807  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:10:48.058047  514436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:10:48.087089  514436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:10:48.115476  514436 ssh_runner.go:195] Run: openssl version
	I1123 10:10:48.121938  514436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:10:48.132750  514436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:10:48.137665  514436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:10:48.137725  514436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:10:48.196309  514436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 10:10:48.220618  514436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:10:48.229014  514436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:10:48.232930  514436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:10:48.233053  514436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:10:48.277865  514436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:10:48.288957  514436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:10:48.297761  514436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:10:48.302227  514436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:10:48.302307  514436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:10:48.383276  514436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
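Each CA certificate copied under /usr/share/ca-certificates is then exposed to OpenSSL's trust lookup by symlinking /etc/ssl/certs/<subject-hash>.0 at it, which is what the openssl x509 -hash calls and ln -fs commands above do. A Go sketch of the same step; the helper name is ours and it shells out to the openssl binary just as the log does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash asks openssl for the certificate's subject hash, then
    // makes the cert discoverable by symlinking /etc/ssl/certs/<hash>.0 at it.
    func linkBySubjectHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace a stale link if one exists
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }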
	I1123 10:10:48.409555  514436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:10:48.424766  514436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:10:48.544261  514436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:10:48.638209  514436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:10:48.710619  514436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:10:48.779780  514436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:10:48.854733  514436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
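Before reusing the existing control-plane certificates, each one is checked with openssl x509 -checkend 86400, i.e. "does this cert expire within the next 24 hours?". The same question answered in pure Go with crypto/x509 (helper name ours):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the check `openssl x509 -checkend 86400` performs in the log above.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }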
	I1123 10:10:48.968489  514436 kubeadm.go:401] StartCluster: {Name:no-preload-020224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-020224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:10:48.968604  514436 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:10:48.968674  514436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:10:49.079770  514436 cri.go:89] found id: "fde673b61a03b720e2492ddf051014b494251142f14b9bcf92cb9b5416dc9304"
	I1123 10:10:49.079803  514436 cri.go:89] found id: "e20d0c00b09f6363ed0697d9006ccbda9d1b29842f0e683983474b226e898361"
	I1123 10:10:49.079808  514436 cri.go:89] found id: "cc22d1a213207a0fdc2938062c1f1f5505506d20a750b9890fa2b63926bbbfa7"
	I1123 10:10:49.079812  514436 cri.go:89] found id: "ec9f0e1b62e29a096907f8e55276c570c3b3ba64c77efeee2ee959d0dec0f641"
	I1123 10:10:49.079815  514436 cri.go:89] found id: ""
	I1123 10:10:49.079863  514436 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:10:49.108972  514436 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:10:49Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:10:49.109067  514436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:10:49.137797  514436 kubeadm.go:417] found existing configuration files, will attempt cluster restart
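The single sudo ls above probes for kubeadm-flags.env, the kubelet config.yaml and the etcd data directory; only when all of them are present does minikube take the "cluster restart" path instead of a fresh kubeadm init. A minimal sketch of that probe (helper name ours):

    package main

    import (
    	"fmt"
    	"os"
    )

    // hasExistingConfig mirrors the `sudo ls ...` probe above: if every marker
    // path exists, a cluster restart is attempted instead of a fresh init.
    func hasExistingConfig(paths ...string) bool {
    	for _, p := range paths {
    		if _, err := os.Stat(p); err != nil {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	restart := hasExistingConfig(
    		"/var/lib/kubelet/kubeadm-flags.env",
    		"/var/lib/kubelet/config.yaml",
    		"/var/lib/minikube/etcd",
    	)
    	fmt.Println("attempt cluster restart:", restart)
    }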
	I1123 10:10:49.137829  514436 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:10:49.137880  514436 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:10:49.146502  514436 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:10:49.146940  514436 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-020224" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:10:49.148070  514436 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-282998/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-020224" cluster setting kubeconfig missing "no-preload-020224" context setting]
	I1123 10:10:49.148405  514436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:49.150103  514436 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:10:49.162470  514436 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 10:10:49.162506  514436 kubeadm.go:602] duration metric: took 24.669678ms to restartPrimaryControlPlane
	I1123 10:10:49.162516  514436 kubeadm.go:403] duration metric: took 194.038688ms to StartCluster
	I1123 10:10:49.162543  514436 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:49.162618  514436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:10:49.163776  514436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:10:49.165208  514436 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:10:49.165480  514436 config.go:182] Loaded profile config "no-preload-020224": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:49.165563  514436 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:10:49.165874  514436 addons.go:70] Setting storage-provisioner=true in profile "no-preload-020224"
	I1123 10:10:49.165894  514436 addons.go:239] Setting addon storage-provisioner=true in "no-preload-020224"
	W1123 10:10:49.165900  514436 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:10:49.165929  514436 host.go:66] Checking if "no-preload-020224" exists ...
	I1123 10:10:49.166377  514436 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:10:49.166546  514436 addons.go:70] Setting dashboard=true in profile "no-preload-020224"
	I1123 10:10:49.166577  514436 addons.go:239] Setting addon dashboard=true in "no-preload-020224"
	W1123 10:10:49.166584  514436 addons.go:248] addon dashboard should already be in state true
	I1123 10:10:49.166608  514436 host.go:66] Checking if "no-preload-020224" exists ...
	I1123 10:10:49.166887  514436 addons.go:70] Setting default-storageclass=true in profile "no-preload-020224"
	I1123 10:10:49.166904  514436 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-020224"
	I1123 10:10:49.167026  514436 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:10:49.167405  514436 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:10:49.177669  514436 out.go:179] * Verifying Kubernetes components...
	I1123 10:10:49.181016  514436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:10:49.223759  514436 addons.go:239] Setting addon default-storageclass=true in "no-preload-020224"
	W1123 10:10:49.223781  514436 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:10:49.223806  514436 host.go:66] Checking if "no-preload-020224" exists ...
	I1123 10:10:49.224227  514436 cli_runner.go:164] Run: docker container inspect no-preload-020224 --format={{.State.Status}}
	I1123 10:10:49.235610  514436 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:10:49.241658  514436 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:10:49.251488  514436 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:10:49.251571  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:10:49.251582  514436 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:10:49.251649  514436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:10:49.255528  514436 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:10:49.255555  514436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:10:49.255651  514436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:10:49.279905  514436 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:10:49.279929  514436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:10:49.279989  514436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-020224
	I1123 10:10:49.306700  514436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:10:49.324024  514436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:10:49.326240  514436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33476 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/no-preload-020224/id_rsa Username:docker}
	I1123 10:10:47.215352  516347 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 10:10:47.215611  516347 start.go:159] libmachine.API.Create for "embed-certs-566990" (driver="docker")
	I1123 10:10:47.215659  516347 client.go:173] LocalClient.Create starting
	I1123 10:10:47.215733  516347 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem
	I1123 10:10:47.215768  516347 main.go:143] libmachine: Decoding PEM data...
	I1123 10:10:47.215788  516347 main.go:143] libmachine: Parsing certificate...
	I1123 10:10:47.215847  516347 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem
	I1123 10:10:47.215869  516347 main.go:143] libmachine: Decoding PEM data...
	I1123 10:10:47.215887  516347 main.go:143] libmachine: Parsing certificate...
	I1123 10:10:47.216287  516347 cli_runner.go:164] Run: docker network inspect embed-certs-566990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:10:47.233333  516347 cli_runner.go:211] docker network inspect embed-certs-566990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:10:47.233464  516347 network_create.go:284] running [docker network inspect embed-certs-566990] to gather additional debugging logs...
	I1123 10:10:47.233491  516347 cli_runner.go:164] Run: docker network inspect embed-certs-566990
	W1123 10:10:47.250946  516347 cli_runner.go:211] docker network inspect embed-certs-566990 returned with exit code 1
	I1123 10:10:47.250984  516347 network_create.go:287] error running [docker network inspect embed-certs-566990]: docker network inspect embed-certs-566990: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-566990 not found
	I1123 10:10:47.251052  516347 network_create.go:289] output of [docker network inspect embed-certs-566990]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-566990 not found
	
	** /stderr **
	I1123 10:10:47.251189  516347 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:10:47.278028  516347 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d56166f18c3a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0e:f2:0f:1a:18:9c} reservation:<nil>}
	I1123 10:10:47.278403  516347 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fe6f7fd59576 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:8b:f7:8e:2b:59} reservation:<nil>}
	I1123 10:10:47.278654  516347 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c262e08021b1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:16:63:f0:32:b6} reservation:<nil>}
	I1123 10:10:47.279071  516347 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a9e50}
	I1123 10:10:47.279096  516347 network_create.go:124] attempt to create docker network embed-certs-566990 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 10:10:47.279153  516347 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-566990 embed-certs-566990
	I1123 10:10:47.353311  516347 network_create.go:108] docker network embed-certs-566990 192.168.76.0/24 created
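Meanwhile the interleaved process 516347 is creating the embed-certs-566990 profile: it walks candidate /24 blocks, skips 192.168.49.0, 192.168.58.0 and 192.168.67.0 because existing bridges already own them, and creates the docker network on the first free block, 192.168.76.0/24. A toy sketch of that selection (names ours, subnets taken from the log):

    package main

    import "fmt"

    // firstFreeSubnet walks candidate /24 blocks in order and returns the first
    // one not already owned by an existing docker bridge.
    func firstFreeSubnet(candidates []string, taken map[string]bool) (string, bool) {
    	for _, c := range candidates {
    		if !taken[c] {
    			return c, true
    		}
    	}
    	return "", false
    }

    func main() {
    	// Subnets reported as taken in the log, in minikube's probe order.
    	taken := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    	}
    	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24", "192.168.85.0/24"}
    	if s, ok := firstFreeSubnet(candidates, taken); ok {
    		fmt.Println("using free private subnet", s)
    	}
    }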
	I1123 10:10:47.353345  516347 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-566990" container
	I1123 10:10:47.353521  516347 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:10:47.370842  516347 cli_runner.go:164] Run: docker volume create embed-certs-566990 --label name.minikube.sigs.k8s.io=embed-certs-566990 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:10:47.389735  516347 oci.go:103] Successfully created a docker volume embed-certs-566990
	I1123 10:10:47.389828  516347 cli_runner.go:164] Run: docker run --rm --name embed-certs-566990-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-566990 --entrypoint /usr/bin/test -v embed-certs-566990:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:10:48.006318  516347 oci.go:107] Successfully prepared a docker volume embed-certs-566990
	I1123 10:10:48.006398  516347 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:10:48.006409  516347 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:10:48.006483  516347 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-566990:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
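The preloaded image tarball is unpacked straight into the freshly created named volume by a throwaway container whose entrypoint is tar, so the kic node starts with its images already in place. A sketch that assembles the equivalent docker run invocation with os/exec (helper name ours; the long cache path is abbreviated in main):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // extractPreload unpacks a preloaded-images tarball into a named volume by
    // running a disposable container whose entrypoint is tar, as in the log above.
    func extractPreload(tarball, volume, image string) *exec.Cmd {
    	return exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    }

    func main() {
    	cmd := extractPreload(
    		".../preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4", // cache path abbreviated
    		"embed-certs-566990",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948",
    	)
    	fmt.Println(cmd.String())
    }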
	I1123 10:10:49.631921  514436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:10:49.663125  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:10:49.663147  514436 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:10:49.667526  514436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:10:49.684895  514436 node_ready.go:35] waiting up to 6m0s for node "no-preload-020224" to be "Ready" ...
	I1123 10:10:49.707342  514436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:10:49.715217  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:10:49.715290  514436 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:10:49.776300  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:10:49.776383  514436 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:10:49.883164  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:10:49.883238  514436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:10:49.959486  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:10:49.959562  514436 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:10:50.043736  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:10:50.043812  514436 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:10:50.078917  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:10:50.078993  514436 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:10:50.117342  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:10:50.117432  514436 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:10:50.155429  514436 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:10:50.155506  514436 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:10:50.189318  514436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:10:53.762090  516347 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-566990:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.75555957s)
	I1123 10:10:53.762128  516347 kic.go:203] duration metric: took 5.755706051s to extract preloaded images to volume ...
	W1123 10:10:53.762260  516347 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 10:10:53.762363  516347 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:10:53.854067  516347 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-566990 --name embed-certs-566990 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-566990 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-566990 --network embed-certs-566990 --ip 192.168.76.2 --volume embed-certs-566990:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:10:54.272102  516347 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Running}}
	I1123 10:10:54.302309  516347 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:10:54.341098  516347 cli_runner.go:164] Run: docker exec embed-certs-566990 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:10:54.411187  516347 oci.go:144] the created container "embed-certs-566990" has a running status.
	I1123 10:10:54.411215  516347 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa...
	I1123 10:10:55.051678  516347 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:10:55.075515  516347 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:10:55.107357  516347 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:10:55.107376  516347 kic_runner.go:114] Args: [docker exec --privileged embed-certs-566990 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 10:10:55.225469  516347 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:10:55.252832  516347 machine.go:94] provisionDockerMachine start ...
	I1123 10:10:55.252941  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:10:55.277519  516347 main.go:143] libmachine: Using SSH client type: native
	I1123 10:10:55.277854  516347 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33481 <nil> <nil>}
	I1123 10:10:55.277870  516347 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:10:55.278549  516347 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
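The first SSH dial against the forwarded port 33481 fails with a handshake EOF simply because sshd inside the brand-new container is not up yet; provisioning retries until it responds. A simplified sketch that waits for the port to accept TCP connections before attempting the SSH handshake (helper name ours):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH polls the forwarded SSH port until a TCP connection succeeds or
    // the deadline passes; early attempts fail while sshd is still starting.
    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
    }

    func main() {
    	if err := waitForSSH("127.0.0.1:33481", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }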
	I1123 10:10:55.553037  514436 node_ready.go:49] node "no-preload-020224" is "Ready"
	I1123 10:10:55.553069  514436 node_ready.go:38] duration metric: took 5.868098777s for node "no-preload-020224" to be "Ready" ...
	I1123 10:10:55.553084  514436 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:10:55.553145  514436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:10:57.506904  514436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.839346468s)
	I1123 10:10:57.506967  514436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.799554893s)
	I1123 10:10:57.507219  514436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.317822949s)
	I1123 10:10:57.507436  514436 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.954274127s)
	I1123 10:10:57.507485  514436 api_server.go:72] duration metric: took 8.341930566s to wait for apiserver process to appear ...
	I1123 10:10:57.507509  514436 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:10:57.507540  514436 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:10:57.510436  514436 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-020224 addons enable metrics-server
	
	I1123 10:10:57.515548  514436 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 10:10:57.516886  514436 api_server.go:141] control plane version: v1.34.1
	I1123 10:10:57.516907  514436 api_server.go:131] duration metric: took 9.378292ms to wait for apiserver health ...
	I1123 10:10:57.516915  514436 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:10:57.521608  514436 system_pods.go:59] 8 kube-system pods found
	I1123 10:10:57.521650  514436 system_pods.go:61] "coredns-66bc5c9577-v59bz" [9cd5752f-f6a3-4db9-a644-1c18ff268642] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:10:57.521661  514436 system_pods.go:61] "etcd-no-preload-020224" [8dccbade-8a60-4d0f-9676-d6a2755663f9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:10:57.521668  514436 system_pods.go:61] "kindnet-ghq9t" [a82575e8-2a03-4722-8611-dab3ceda4f39] Running
	I1123 10:10:57.521675  514436 system_pods.go:61] "kube-apiserver-no-preload-020224" [a7f60049-0c2f-4359-9d93-d13658d03d02] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:10:57.521681  514436 system_pods.go:61] "kube-controller-manager-no-preload-020224" [8a60d5f3-d38b-408b-ac99-8e9e3cc1da22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:10:57.521694  514436 system_pods.go:61] "kube-proxy-7s6pf" [54924ab5-094f-48de-8483-f31455e53773] Running
	I1123 10:10:57.521700  514436 system_pods.go:61] "kube-scheduler-no-preload-020224" [313e344b-1c48-4c74-8237-387cff8a8c8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:10:57.521704  514436 system_pods.go:61] "storage-provisioner" [6796ee0a-02e3-4c46-a03b-115136ad2780] Running
	I1123 10:10:57.521710  514436 system_pods.go:74] duration metric: took 4.789177ms to wait for pod list to return data ...
	I1123 10:10:57.521721  514436 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:10:57.522362  514436 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 10:10:57.524681  514436 default_sa.go:45] found service account: "default"
	I1123 10:10:57.524707  514436 default_sa.go:55] duration metric: took 2.979578ms for default service account to be created ...
	I1123 10:10:57.524722  514436 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:10:57.525200  514436 addons.go:530] duration metric: took 8.359637942s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 10:10:57.527562  514436 system_pods.go:86] 8 kube-system pods found
	I1123 10:10:57.527597  514436 system_pods.go:89] "coredns-66bc5c9577-v59bz" [9cd5752f-f6a3-4db9-a644-1c18ff268642] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:10:57.527607  514436 system_pods.go:89] "etcd-no-preload-020224" [8dccbade-8a60-4d0f-9676-d6a2755663f9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:10:57.527613  514436 system_pods.go:89] "kindnet-ghq9t" [a82575e8-2a03-4722-8611-dab3ceda4f39] Running
	I1123 10:10:57.527624  514436 system_pods.go:89] "kube-apiserver-no-preload-020224" [a7f60049-0c2f-4359-9d93-d13658d03d02] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:10:57.527633  514436 system_pods.go:89] "kube-controller-manager-no-preload-020224" [8a60d5f3-d38b-408b-ac99-8e9e3cc1da22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:10:57.527641  514436 system_pods.go:89] "kube-proxy-7s6pf" [54924ab5-094f-48de-8483-f31455e53773] Running
	I1123 10:10:57.527659  514436 system_pods.go:89] "kube-scheduler-no-preload-020224" [313e344b-1c48-4c74-8237-387cff8a8c8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:10:57.527674  514436 system_pods.go:89] "storage-provisioner" [6796ee0a-02e3-4c46-a03b-115136ad2780] Running
	I1123 10:10:57.527683  514436 system_pods.go:126] duration metric: took 2.95411ms to wait for k8s-apps to be running ...
	I1123 10:10:57.527693  514436 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:10:57.527749  514436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:10:57.541369  514436 system_svc.go:56] duration metric: took 13.66648ms WaitForService to wait for kubelet
	I1123 10:10:57.541399  514436 kubeadm.go:587] duration metric: took 8.375860001s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:10:57.541447  514436 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:10:57.544326  514436 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:10:57.544356  514436 node_conditions.go:123] node cpu capacity is 2
	I1123 10:10:57.544369  514436 node_conditions.go:105] duration metric: took 2.916677ms to run NodePressure ...
	I1123 10:10:57.544382  514436 start.go:242] waiting for startup goroutines ...
	I1123 10:10:57.544390  514436 start.go:247] waiting for cluster config update ...
	I1123 10:10:57.544401  514436 start.go:256] writing updated cluster config ...
	I1123 10:10:57.544686  514436 ssh_runner.go:195] Run: rm -f paused
	I1123 10:10:57.548536  514436 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:10:57.551954  514436 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v59bz" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:10:59.562369  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
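The extra wait above polls each control-plane pod until its Ready condition is True (or the pod is gone); coredns is still not Ready at this point. A hedged client-go sketch of that per-pod check, not minikube's code; the kubeconfig path in main is a placeholder:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the named pod currently has a Ready condition
    // with status True, the predicate the pod_ready wait above keeps polling.
    func podIsReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		ready, err := podIsReady(cs, "kube-system", "coredns-66bc5c9577-v59bz")
    		if err == nil && ready {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }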
	I1123 10:10:58.433253  516347 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-566990
	
	I1123 10:10:58.433279  516347 ubuntu.go:182] provisioning hostname "embed-certs-566990"
	I1123 10:10:58.433369  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:10:58.451408  516347 main.go:143] libmachine: Using SSH client type: native
	I1123 10:10:58.451740  516347 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33481 <nil> <nil>}
	I1123 10:10:58.451755  516347 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-566990 && echo "embed-certs-566990" | sudo tee /etc/hostname
	I1123 10:10:58.618144  516347 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-566990
	
	I1123 10:10:58.618243  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:10:58.636752  516347 main.go:143] libmachine: Using SSH client type: native
	I1123 10:10:58.637113  516347 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33481 <nil> <nil>}
	I1123 10:10:58.637136  516347 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-566990' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-566990/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-566990' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:10:58.789683  516347 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:10:58.789706  516347 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 10:10:58.789726  516347 ubuntu.go:190] setting up certificates
	I1123 10:10:58.789736  516347 provision.go:84] configureAuth start
	I1123 10:10:58.789808  516347 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-566990
	I1123 10:10:58.807140  516347 provision.go:143] copyHostCerts
	I1123 10:10:58.807205  516347 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 10:10:58.807214  516347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 10:10:58.807289  516347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 10:10:58.807383  516347 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 10:10:58.807388  516347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 10:10:58.807415  516347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 10:10:58.807464  516347 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 10:10:58.807468  516347 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 10:10:58.807493  516347 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 10:10:58.807536  516347 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.embed-certs-566990 san=[127.0.0.1 192.168.76.2 embed-certs-566990 localhost minikube]
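configureAuth above generates a machine server certificate signed by the shared CA, with the listed SANs (127.0.0.1, 192.168.76.2, embed-certs-566990, localhost, minikube) baked in. A hedged crypto/x509 sketch of issuing such a cert, not minikube's implementation; the throwaway CA in main stands in for the reused ca.pem/ca-key.pem pair, and some error handling is trimmed:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert issues a server certificate signed by the given CA, with the
    // host names and IPs from the log above as SANs.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-566990"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"embed-certs-566990", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }

    func main() {
    	// Throwaway CA for the example only.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	ca, _ := x509.ParseCertificate(caDER)
    	der, _, err := newServerCert(ca, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("issued server cert, %d DER bytes\n", len(der))
    }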
	I1123 10:10:59.148898  516347 provision.go:177] copyRemoteCerts
	I1123 10:10:59.148974  516347 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:10:59.149030  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:10:59.167355  516347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:10:59.274720  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 10:10:59.293984  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:10:59.320633  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:10:59.339630  516347 provision.go:87] duration metric: took 549.87034ms to configureAuth
	I1123 10:10:59.339661  516347 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:10:59.339850  516347 config.go:182] Loaded profile config "embed-certs-566990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:10:59.339959  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:10:59.367088  516347 main.go:143] libmachine: Using SSH client type: native
	I1123 10:10:59.367405  516347 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33481 <nil> <nil>}
	I1123 10:10:59.367430  516347 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:10:59.737195  516347 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:10:59.737270  516347 machine.go:97] duration metric: took 4.484402073s to provisionDockerMachine
	I1123 10:10:59.737297  516347 client.go:176] duration metric: took 12.521626424s to LocalClient.Create
	I1123 10:10:59.737354  516347 start.go:167] duration metric: took 12.521743357s to libmachine.API.Create "embed-certs-566990"
	I1123 10:10:59.737381  516347 start.go:293] postStartSetup for "embed-certs-566990" (driver="docker")
	I1123 10:10:59.737436  516347 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:10:59.737536  516347 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:10:59.737610  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:10:59.764147  516347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:10:59.886438  516347 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:10:59.891709  516347 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:10:59.891736  516347 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:10:59.891747  516347 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 10:10:59.891803  516347 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 10:10:59.891903  516347 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 10:10:59.892017  516347 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:10:59.902530  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:10:59.924741  516347 start.go:296] duration metric: took 187.32662ms for postStartSetup
	I1123 10:10:59.925218  516347 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-566990
	I1123 10:10:59.952562  516347 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/config.json ...
	I1123 10:10:59.952830  516347 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:10:59.952874  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:10:59.984026  516347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:11:00.151543  516347 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:11:00.160148  516347 start.go:128] duration metric: took 12.948086274s to createHost
	I1123 10:11:00.160177  516347 start.go:83] releasing machines lock for "embed-certs-566990", held for 12.94821266s
	I1123 10:11:00.160284  516347 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-566990
	I1123 10:11:00.204990  516347 ssh_runner.go:195] Run: cat /version.json
	I1123 10:11:00.205055  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:11:00.205528  516347 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:11:00.205604  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:11:00.294182  516347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:11:00.312033  516347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:11:00.527782  516347 ssh_runner.go:195] Run: systemctl --version
	I1123 10:11:00.534488  516347 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:11:00.575240  516347 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:11:00.579625  516347 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:11:00.579724  516347 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:11:00.608651  516347 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 10:11:00.608673  516347 start.go:496] detecting cgroup driver to use...
	I1123 10:11:00.608704  516347 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:11:00.608759  516347 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:11:00.627252  516347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:11:00.642942  516347 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:11:00.643004  516347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:11:00.661493  516347 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:11:00.683610  516347 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:11:00.860965  516347 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:11:01.046572  516347 docker.go:234] disabling docker service ...
	I1123 10:11:01.046638  516347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:11:01.082693  516347 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:11:01.102691  516347 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:11:01.285967  516347 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:11:01.446585  516347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:11:01.468223  516347 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:11:01.484073  516347 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:11:01.484207  516347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:11:01.494836  516347 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:11:01.494976  516347 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:11:01.504335  516347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:11:01.515110  516347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:11:01.524788  516347 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:11:01.533869  516347 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:11:01.543411  516347 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:11:01.560432  516347 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:11:01.570291  516347 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:11:01.579698  516347 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
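Taken together, the sed and echo commands above configure CRI-O's drop-in at /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to "cgroupfs", conmon_cgroup to "pod", and net.ipv4.ip_unprivileged_port_start=0 is appended to default_sysctls. A quick way to confirm the result on the node (illustrative only, not part of this run):

	# Show the keys that the edits above are expected to have written.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf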
	I1123 10:11:01.588439  516347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:11:01.738002  516347 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:11:01.981137  516347 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:11:01.981261  516347 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:11:01.985871  516347 start.go:564] Will wait 60s for crictl version
	I1123 10:11:01.986019  516347 ssh_runner.go:195] Run: which crictl
	I1123 10:11:01.990305  516347 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:11:02.042519  516347 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:11:02.042682  516347 ssh_runner.go:195] Run: crio --version
	I1123 10:11:02.083213  516347 ssh_runner.go:195] Run: crio --version
	I1123 10:11:02.146013  516347 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:11:02.149054  516347 cli_runner.go:164] Run: docker network inspect embed-certs-566990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
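The Go template above packs the network name, driver, subnet, gateway, MTU and container IPs into a single JSON-like string. If only the subnet and gateway are of interest, a much smaller template over the same fields does the job (illustrative only, not run here):

	docker network inspect embed-certs-566990 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'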
	I1123 10:11:02.177744  516347 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:11:02.182049  516347 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:11:02.194552  516347 kubeadm.go:884] updating cluster {Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:11:02.194679  516347 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:11:02.194732  516347 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:11:02.251925  516347 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:11:02.251944  516347 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:11:02.252002  516347 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:11:02.281436  516347 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:11:02.281459  516347 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:11:02.281467  516347 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 10:11:02.281562  516347 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-566990 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
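This unit fragment is later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below), so the effective kubelet command line can be reviewed on the node with, for example (illustrative only, not part of this run):

	# Print kubelet.service together with its drop-ins, including 10-kubeadm.conf.
	sudo systemctl cat kubelet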
	I1123 10:11:02.281650  516347 ssh_runner.go:195] Run: crio config
	I1123 10:11:02.356539  516347 cni.go:84] Creating CNI manager for ""
	I1123 10:11:02.356562  516347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:11:02.356585  516347 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:11:02.356608  516347 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-566990 NodeName:embed-certs-566990 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:11:02.356748  516347 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-566990"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:11:02.356820  516347 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:11:02.365758  516347 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:11:02.365840  516347 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:11:02.374550  516347 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 10:11:02.388345  516347 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:11:02.402672  516347 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
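The generated kubeadm config written above can be sanity-checked by hand before init; assuming the `kubeadm config validate` subcommand of recent kubeadm releases, a minimal check would be (illustrative only, not run here):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new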
	I1123 10:11:02.417283  516347 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:11:02.421094  516347 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:11:02.431716  516347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:11:02.588268  516347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:11:02.607044  516347 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990 for IP: 192.168.76.2
	I1123 10:11:02.607067  516347 certs.go:195] generating shared ca certs ...
	I1123 10:11:02.607083  516347 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:02.607222  516347 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:11:02.607273  516347 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:11:02.607282  516347 certs.go:257] generating profile certs ...
	I1123 10:11:02.607342  516347 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/client.key
	I1123 10:11:02.607359  516347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/client.crt with IP's: []
	I1123 10:11:03.186151  516347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/client.crt ...
	I1123 10:11:03.186230  516347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/client.crt: {Name:mk310c5f03a9a0317bf7e4490391f5f9334d4c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:03.186471  516347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/client.key ...
	I1123 10:11:03.186506  516347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/client.key: {Name:mkafc12f332c48c6902b0e78ec546ce7c7aab6fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:03.186661  516347 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key.e8338b8a
	I1123 10:11:03.186701  516347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.crt.e8338b8a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 10:11:03.236918  516347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.crt.e8338b8a ...
	I1123 10:11:03.236984  516347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.crt.e8338b8a: {Name:mkd733a72b4ba50b720215823b349a40bab4c1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:03.237202  516347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key.e8338b8a ...
	I1123 10:11:03.237238  516347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key.e8338b8a: {Name:mkd79d1a0674af1e548a4eca5efb393ca1ee4981 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:03.237456  516347 certs.go:382] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.crt.e8338b8a -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.crt
	I1123 10:11:03.237592  516347 certs.go:386] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key.e8338b8a -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key
	I1123 10:11:03.237680  516347 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.key
	I1123 10:11:03.237729  516347 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.crt with IP's: []
	I1123 10:11:03.572199  516347 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.crt ...
	I1123 10:11:03.572271  516347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.crt: {Name:mk9c74dec48e7a852b7547fb65a91236b1e1122b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:03.572485  516347 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.key ...
	I1123 10:11:03.572522  516347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.key: {Name:mk939ced4d30b6a615e349eb4c52a44b92624537 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:03.572792  516347 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:11:03.573289  516347 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:11:03.573337  516347 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:11:03.573445  516347 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:11:03.573502  516347 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:11:03.573560  516347 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:11:03.573637  516347 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:11:03.574225  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:11:03.596182  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:11:03.636585  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:11:03.668332  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:11:03.707801  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 10:11:03.726710  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:11:03.746178  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:11:03.767027  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:11:03.787345  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:11:03.808072  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:11:03.827724  516347 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:11:03.848594  516347 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:11:03.862747  516347 ssh_runner.go:195] Run: openssl version
	I1123 10:11:03.870204  516347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:11:03.879386  516347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:11:03.883538  516347 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:11:03.883685  516347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:11:03.937809  516347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
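The b5213941.0 filename is not arbitrary: it is the OpenSSL subject hash of the minikube CA, which is exactly what the `openssl x509 -hash -noout` call above prints. A quick way to confirm that the symlink matches the hash (illustrative only, not run here):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"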
	I1123 10:11:03.949103  516347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:11:03.958680  516347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:11:03.962778  516347 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:11:03.962890  516347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:11:04.008107  516347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 10:11:04.017974  516347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:11:04.027359  516347 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:11:04.031901  516347 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:11:04.032020  516347 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:11:04.080975  516347 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:11:04.090398  516347 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:11:04.095596  516347 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:11:04.095652  516347 kubeadm.go:401] StartCluster: {Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:11:04.095730  516347 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:11:04.095787  516347 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:11:04.133496  516347 cri.go:89] found id: ""
	I1123 10:11:04.133565  516347 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:11:04.144867  516347 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:11:04.155322  516347 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:11:04.155400  516347 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:11:04.167167  516347 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:11:04.167189  516347 kubeadm.go:158] found existing configuration files:
	
	I1123 10:11:04.167252  516347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:11:04.178621  516347 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:11:04.178724  516347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:11:04.187132  516347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:11:04.196972  516347 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:11:04.197052  516347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:11:04.205516  516347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:11:04.215287  516347 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:11:04.215361  516347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:11:04.224889  516347 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:11:04.234735  516347 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:11:04.234801  516347 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:11:04.243401  516347 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:11:04.306192  516347 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:11:04.306254  516347 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:11:04.344065  516347 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:11:04.344148  516347 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 10:11:04.344189  516347 kubeadm.go:319] OS: Linux
	I1123 10:11:04.344239  516347 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:11:04.344292  516347 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 10:11:04.344351  516347 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:11:04.344450  516347 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:11:04.344503  516347 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:11:04.344568  516347 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:11:04.344620  516347 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:11:04.344672  516347 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:11:04.344724  516347 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 10:11:04.490021  516347 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:11:04.490145  516347 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:11:04.490238  516347 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:11:04.499733  516347 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1123 10:11:02.059499  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:04.066773  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	I1123 10:11:04.507123  516347 out.go:252]   - Generating certificates and keys ...
	I1123 10:11:04.507229  516347 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:11:04.507296  516347 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:11:04.613985  516347 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:11:05.289379  516347 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:11:05.609642  516347 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:11:06.485212  516347 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:11:06.648126  516347 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:11:06.648687  516347 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-566990 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1123 10:11:06.558799  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:08.560116  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	I1123 10:11:07.227298  516347 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:11:07.227940  516347 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-566990 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 10:11:07.813109  516347 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:11:08.144972  516347 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:11:09.069955  516347 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:11:09.070500  516347 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:11:09.140499  516347 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:11:09.511527  516347 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:11:10.993228  516347 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:11:11.288662  516347 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:11:11.418747  516347 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:11:11.419844  516347 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:11:11.433896  516347 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:11:11.440316  516347 out.go:252]   - Booting up control plane ...
	I1123 10:11:11.440424  516347 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:11:11.440501  516347 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:11:11.440563  516347 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:11:11.473174  516347 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:11:11.473288  516347 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:11:11.488195  516347 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:11:11.488649  516347 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:11:11.488851  516347 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:11:11.649540  516347 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:11:11.649663  516347 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1123 10:11:10.561120  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:13.057399  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	I1123 10:11:13.150236  516347 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501154668s
	I1123 10:11:13.154218  516347 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:11:13.154317  516347 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 10:11:13.154414  516347 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:11:13.154491  516347 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:11:15.200605  516347 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.045337607s
	W1123 10:11:15.059261  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:17.558309  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:19.558990  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	I1123 10:11:17.724883  516347 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.570624719s
	I1123 10:11:19.655954  516347 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501628135s
	I1123 10:11:19.681165  516347 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:11:19.714535  516347 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:11:19.750668  516347 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:11:19.750872  516347 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-566990 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:11:19.799973  516347 kubeadm.go:319] [bootstrap-token] Using token: zpd6zu.4cg9pp8coqg7svyt
	I1123 10:11:19.805666  516347 out.go:252]   - Configuring RBAC rules ...
	I1123 10:11:19.805795  516347 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:11:19.813014  516347 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:11:19.836820  516347 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:11:19.842790  516347 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:11:19.848530  516347 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:11:19.861743  516347 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:11:20.063478  516347 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:11:20.506946  516347 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:11:21.062962  516347 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:11:21.064275  516347 kubeadm.go:319] 
	I1123 10:11:21.064343  516347 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:11:21.064350  516347 kubeadm.go:319] 
	I1123 10:11:21.064422  516347 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:11:21.064426  516347 kubeadm.go:319] 
	I1123 10:11:21.064449  516347 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:11:21.064505  516347 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:11:21.064552  516347 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:11:21.064557  516347 kubeadm.go:319] 
	I1123 10:11:21.064607  516347 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:11:21.064611  516347 kubeadm.go:319] 
	I1123 10:11:21.064655  516347 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:11:21.064659  516347 kubeadm.go:319] 
	I1123 10:11:21.064707  516347 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:11:21.064778  516347 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:11:21.064843  516347 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:11:21.064847  516347 kubeadm.go:319] 
	I1123 10:11:21.064926  516347 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:11:21.065002  516347 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:11:21.065037  516347 kubeadm.go:319] 
	I1123 10:11:21.065116  516347 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zpd6zu.4cg9pp8coqg7svyt \
	I1123 10:11:21.065212  516347 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:887f8119ffe4d5a917d34cb24e0eb6ba3996e6bcce8cd575315ae96526a54a7e \
	I1123 10:11:21.065231  516347 kubeadm.go:319] 	--control-plane 
	I1123 10:11:21.065235  516347 kubeadm.go:319] 
	I1123 10:11:21.065315  516347 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:11:21.065319  516347 kubeadm.go:319] 
	I1123 10:11:21.065396  516347 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zpd6zu.4cg9pp8coqg7svyt \
	I1123 10:11:21.065539  516347 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:887f8119ffe4d5a917d34cb24e0eb6ba3996e6bcce8cd575315ae96526a54a7e 
	I1123 10:11:21.069973  516347 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 10:11:21.070209  516347 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 10:11:21.070315  516347 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:11:21.070398  516347 cni.go:84] Creating CNI manager for ""
	I1123 10:11:21.070422  516347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:11:21.075548  516347 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 10:11:21.078712  516347 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:11:21.082769  516347 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:11:21.082792  516347 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:11:21.103006  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:11:21.437803  516347 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:11:21.437868  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:21.437931  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-566990 minikube.k8s.io/updated_at=2025_11_23T10_11_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=embed-certs-566990 minikube.k8s.io/primary=true
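The minikube.k8s.io labels applied above can be read back with a plain node get, using the same kubectl binary and kubeconfig as the command above (illustrative only, not run here):

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get node embed-certs-566990 --show-labels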
	I1123 10:11:21.604086  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:21.604164  516347 ops.go:34] apiserver oom_adj: -16
	W1123 10:11:22.058188  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:24.557706  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	I1123 10:11:22.104394  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:22.604220  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:23.104256  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:23.604257  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:24.104111  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:24.604193  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:25.104531  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:25.604155  516347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:11:25.743011  516347 kubeadm.go:1114] duration metric: took 4.305209556s to wait for elevateKubeSystemPrivileges
	I1123 10:11:25.743040  516347 kubeadm.go:403] duration metric: took 21.647391653s to StartCluster
	I1123 10:11:25.743058  516347 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:25.743120  516347 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:11:25.744547  516347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:11:25.744765  516347 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:11:25.744844  516347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:11:25.745070  516347 config.go:182] Loaded profile config "embed-certs-566990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:11:25.745101  516347 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:11:25.745156  516347 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-566990"
	I1123 10:11:25.745169  516347 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-566990"
	I1123 10:11:25.745189  516347 host.go:66] Checking if "embed-certs-566990" exists ...
	I1123 10:11:25.745930  516347 addons.go:70] Setting default-storageclass=true in profile "embed-certs-566990"
	I1123 10:11:25.745956  516347 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-566990"
	I1123 10:11:25.746025  516347 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:11:25.746250  516347 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:11:25.749706  516347 out.go:179] * Verifying Kubernetes components...
	I1123 10:11:25.757891  516347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:11:25.782262  516347 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:11:25.789653  516347 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:11:25.789676  516347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:11:25.789740  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:11:25.791439  516347 addons.go:239] Setting addon default-storageclass=true in "embed-certs-566990"
	I1123 10:11:25.791487  516347 host.go:66] Checking if "embed-certs-566990" exists ...
	I1123 10:11:25.791921  516347 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:11:25.828953  516347 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:11:25.828973  516347 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:11:25.829043  516347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:11:25.843158  516347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:11:25.867977  516347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33481 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:11:26.121551  516347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:11:26.171044  516347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:11:26.171201  516347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:11:26.204251  516347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:11:26.911303  516347 node_ready.go:35] waiting up to 6m0s for node "embed-certs-566990" to be "Ready" ...
	I1123 10:11:26.911691  516347 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
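The host-record injection reported here is done by rewriting the live coredns ConfigMap with the sed pipeline a few lines earlier, not by applying a manifest: a hosts block is inserted ahead of the forward directive and a log directive ahead of errors. Reconstructed from that command (indentation approximate, surrounding directives elided), the resulting Corefile fragment looks roughly like:

        log
        errors
        ...
        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf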
	I1123 10:11:26.954570  516347 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1123 10:11:26.558809  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:29.057702  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	I1123 10:11:26.958699  516347 addons.go:530] duration metric: took 1.21358845s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:11:27.416197  516347 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-566990" context rescaled to 1 replicas
	W1123 10:11:28.914292  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:30.914682  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:31.558229  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:34.057435  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	W1123 10:11:33.414717  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:35.914248  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:36.057663  514436 pod_ready.go:104] pod "coredns-66bc5c9577-v59bz" is not "Ready", error: <nil>
	I1123 10:11:38.557764  514436 pod_ready.go:94] pod "coredns-66bc5c9577-v59bz" is "Ready"
	I1123 10:11:38.557796  514436 pod_ready.go:86] duration metric: took 41.005814081s for pod "coredns-66bc5c9577-v59bz" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:38.560510  514436 pod_ready.go:83] waiting for pod "etcd-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:38.565230  514436 pod_ready.go:94] pod "etcd-no-preload-020224" is "Ready"
	I1123 10:11:38.565260  514436 pod_ready.go:86] duration metric: took 4.728357ms for pod "etcd-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:38.568295  514436 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:38.573206  514436 pod_ready.go:94] pod "kube-apiserver-no-preload-020224" is "Ready"
	I1123 10:11:38.573235  514436 pod_ready.go:86] duration metric: took 4.912361ms for pod "kube-apiserver-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:38.575569  514436 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:38.756514  514436 pod_ready.go:94] pod "kube-controller-manager-no-preload-020224" is "Ready"
	I1123 10:11:38.756542  514436 pod_ready.go:86] duration metric: took 180.943339ms for pod "kube-controller-manager-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:38.956485  514436 pod_ready.go:83] waiting for pod "kube-proxy-7s6pf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:39.356167  514436 pod_ready.go:94] pod "kube-proxy-7s6pf" is "Ready"
	I1123 10:11:39.356192  514436 pod_ready.go:86] duration metric: took 399.68094ms for pod "kube-proxy-7s6pf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:39.555948  514436 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:39.956235  514436 pod_ready.go:94] pod "kube-scheduler-no-preload-020224" is "Ready"
	I1123 10:11:39.956306  514436 pod_ready.go:86] duration metric: took 400.328671ms for pod "kube-scheduler-no-preload-020224" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:11:39.956339  514436 pod_ready.go:40] duration metric: took 42.407757118s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:11:40.023535  514436 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:11:40.026595  514436 out.go:179] * Done! kubectl is now configured to use "no-preload-020224" cluster and "default" namespace by default
	W1123 10:11:38.414203  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:40.420494  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:42.914931  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:45.414536  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:47.414639  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:49.915221  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:52.414847  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:54.914508  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:11:56.914795  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
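The node_ready.go retries above poll the node object for "embed-certs-566990" until its Ready condition flips to True, within the 6m0s budget announced at 10:11:26 (the interleaved pod_ready.go lines from the other run do the same for individual pods). A minimal client-go sketch of that kind of wait follows; it is not minikube's actual implementation, assumes a recent client-go/apimachinery, and the kubeconfig path and node name are placeholders taken from the log.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"os"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	nodeName := "embed-certs-566990" // illustrative; taken from the log above
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat API errors as transient and keep retrying
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		log.Fatalf("node %q never became Ready: %v", nodeName, err)
    	}
    	fmt.Printf("node %q is Ready\n", nodeName)
    }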
	
	
	==> CRI-O <==
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.910360542Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ecd6e3ae-be8c-466e-9150-f860a4354e09 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.919698388Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3d9dc970-58dc-4754-b65b-b470a89a9402 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.922954369Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr/dashboard-metrics-scraper" id=4460bc5a-ce9d-48af-a5c1-38cd194f6e3d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.923085973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.930327308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.932924283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.952120706Z" level=info msg="Created container 35353a7fdcee3b1c11f15830f6a44f93a5172d7e420261be0ec4df87bb885de5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr/dashboard-metrics-scraper" id=4460bc5a-ce9d-48af-a5c1-38cd194f6e3d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.953273751Z" level=info msg="Starting container: 35353a7fdcee3b1c11f15830f6a44f93a5172d7e420261be0ec4df87bb885de5" id=c607c7f6-907d-4c39-af0e-bd85aae39eb3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:11:34 no-preload-020224 crio[655]: time="2025-11-23T10:11:34.957794203Z" level=info msg="Started container" PID=1638 containerID=35353a7fdcee3b1c11f15830f6a44f93a5172d7e420261be0ec4df87bb885de5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr/dashboard-metrics-scraper id=c607c7f6-907d-4c39-af0e-bd85aae39eb3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7fa17977110b5b37793311c20bd86ce823f50ad184358b8152f76df229afe738
	Nov 23 10:11:34 no-preload-020224 conmon[1636]: conmon 35353a7fdcee3b1c11f1 <ninfo>: container 1638 exited with status 1
	Nov 23 10:11:35 no-preload-020224 crio[655]: time="2025-11-23T10:11:35.26703613Z" level=info msg="Removing container: 413620f72045641113079d4a31f67a7e6fee16a80073eb3040877cd6c11292ae" id=66925551-8142-4507-8cda-a7747be5e643 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:11:35 no-preload-020224 crio[655]: time="2025-11-23T10:11:35.278861664Z" level=info msg="Error loading conmon cgroup of container 413620f72045641113079d4a31f67a7e6fee16a80073eb3040877cd6c11292ae: cgroup deleted" id=66925551-8142-4507-8cda-a7747be5e643 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:11:35 no-preload-020224 crio[655]: time="2025-11-23T10:11:35.289659736Z" level=info msg="Removed container 413620f72045641113079d4a31f67a7e6fee16a80073eb3040877cd6c11292ae: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr/dashboard-metrics-scraper" id=66925551-8142-4507-8cda-a7747be5e643 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.163825138Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.171060418Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.171096217Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.171121374Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.174358409Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.17455298Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.174665557Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.177616638Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.177647875Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.177671596Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.181332359Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:11:37 no-preload-020224 crio[655]: time="2025-11-23T10:11:37.18136614Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	35353a7fdcee3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   7fa17977110b5       dashboard-metrics-scraper-6ffb444bf9-j5kdr   kubernetes-dashboard
	7f683b37fb2e2       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           30 seconds ago       Running             storage-provisioner         2                   d682d5b88afbc       storage-provisioner                          kube-system
	a1167a40e0a49       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   51 seconds ago       Running             kubernetes-dashboard        0                   27d51be8520fe       kubernetes-dashboard-855c9754f9-n54fr        kubernetes-dashboard
	b70888786109f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   166e37624eae1       coredns-66bc5c9577-v59bz                     kube-system
	61ca8c529c76f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   2ea1e4656660b       busybox                                      default
	68f6227f68631       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   504da6166ae02       kube-proxy-7s6pf                             kube-system
	e17889e3bbe35       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           About a minute ago   Exited              storage-provisioner         1                   d682d5b88afbc       storage-provisioner                          kube-system
	bafae4c509b34       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   a8364799eb611       kindnet-ghq9t                                kube-system
	fde673b61a03b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   f7226d8aa1d36       kube-apiserver-no-preload-020224             kube-system
	e20d0c00b09f6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   6287c82f233a1       kube-controller-manager-no-preload-020224    kube-system
	cc22d1a213207       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   49be46b129576       kube-scheduler-no-preload-020224             kube-system
	ec9f0e1b62e29       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   9f6cca0e95669       etcd-no-preload-020224                       kube-system
	
	
	==> coredns [b70888786109ff5bcd4b3c55c8ff29deccf75501effb0a21482fde850addde12] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57967 - 48206 "HINFO IN 7540399292455442044.878110780711058301. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.023046494s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
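The repeated "plugin/ready: Still waiting on: kubernetes" lines mean CoreDNS's readiness endpoint keeps reporting not-ready while the kubernetes plugin waits out the API-server i/o timeouts above. The ready plugin serves plain HTTP on port 8181 by default, so readiness can be probed directly; a small hedged sketch (the pod IP is a placeholder for illustration):

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // probeCoreDNSReady hits the CoreDNS `ready` plugin endpoint (HTTP :8181 by
    // default) and reports whether the plugin chain considers itself ready
    // (HTTP 200) or not (non-200).
    func probeCoreDNSReady(podIP string) (bool, string, error) {
    	client := &http.Client{Timeout: 2 * time.Second}
    	resp, err := client.Get(fmt.Sprintf("http://%s:8181/ready", podIP))
    	if err != nil {
    		return false, "", err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return resp.StatusCode == http.StatusOK, string(body), nil
    }

    func main() {
    	// Hypothetical pod IP; read the real one from `kubectl -n kube-system get pods -o wide`.
    	ready, body, err := probeCoreDNSReady("10.244.0.3")
    	fmt.Println(ready, body, err)
    }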
	
	
	==> describe nodes <==
	Name:               no-preload-020224
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-020224
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=no-preload-020224
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_09_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:09:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-020224
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:11:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:11:26 +0000   Sun, 23 Nov 2025 10:09:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:11:26 +0000   Sun, 23 Nov 2025 10:09:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:11:26 +0000   Sun, 23 Nov 2025 10:09:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:11:26 +0000   Sun, 23 Nov 2025 10:10:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-020224
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                57e370ae-7663-48e3-a7c6-52885f59b718
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 coredns-66bc5c9577-v59bz                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m3s
	  kube-system                 etcd-no-preload-020224                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m9s
	  kube-system                 kindnet-ghq9t                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m3s
	  kube-system                 kube-apiserver-no-preload-020224              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-no-preload-020224     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-7s6pf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-no-preload-020224              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-j5kdr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-n54fr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m1s                   kube-proxy       
	  Normal   Starting                 60s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m19s (x8 over 2m19s)  kubelet          Node no-preload-020224 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m19s (x8 over 2m19s)  kubelet          Node no-preload-020224 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m19s (x8 over 2m19s)  kubelet          Node no-preload-020224 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m9s                   kubelet          Node no-preload-020224 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m9s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m9s                   kubelet          Node no-preload-020224 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m9s                   kubelet          Node no-preload-020224 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m9s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m4s                   node-controller  Node no-preload-020224 event: Registered Node no-preload-020224 in Controller
	  Normal   NodeReady                107s                   kubelet          Node no-preload-020224 status is now: NodeReady
	  Normal   Starting                 71s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 71s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 71s)      kubelet          Node no-preload-020224 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 71s)      kubelet          Node no-preload-020224 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 71s)      kubelet          Node no-preload-020224 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                    node-controller  Node no-preload-020224 event: Registered Node no-preload-020224 in Controller
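The percentages in the "Allocated resources" table above are simply the summed pod requests divided by the node's allocatable capacity; for example 850m of CPU requested against 2 allocatable CPUs is 42.5%, which kubectl rounds down to 42%. A tiny illustrative sketch using the apimachinery resource package, with the values copied from the table:

    package main

    import (
    	"fmt"

    	"k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
    	// 850m requested vs. 2 allocatable CPUs, as in the table above.
    	requested := resource.MustParse("850m")
    	allocatable := resource.MustParse("2")
    	pct := float64(requested.MilliValue()) / float64(allocatable.MilliValue()) * 100
    	fmt.Printf("cpu requests: %.1f%% of allocatable\n", pct) // prints 42.5%
    }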
	
	
	==> dmesg <==
	[Nov23 09:47] overlayfs: idmapped layers are currently not supported
	[ +12.563591] hrtimer: interrupt took 4093727 ns
	[ +14.190024] overlayfs: idmapped layers are currently not supported
	[Nov23 09:49] overlayfs: idmapped layers are currently not supported
	[Nov23 09:50] overlayfs: idmapped layers are currently not supported
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	[Nov23 10:10] overlayfs: idmapped layers are currently not supported
	[Nov23 10:11] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ec9f0e1b62e29a096907f8e55276c570c3b3ba64c77efeee2ee959d0dec0f641] <==
	{"level":"warn","ts":"2025-11-23T10:10:53.573607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.590293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.603920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.620147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.634902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.649731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.664870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.680367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.723653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.732927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.757037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.774625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.794426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.821461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.842307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.858682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.877657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.900432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.942242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.956737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.983955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:53.990142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:54.024886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:54.071408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:10:54.120476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53538","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:11:58 up  2:54,  0 user,  load average: 3.86, 4.40, 3.45
	Linux no-preload-020224 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bafae4c509b34428ee8a90309affc818c464f230a656437d45944bad64ebec14] <==
	I1123 10:10:56.948131       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:10:56.957788       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 10:10:56.957942       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:10:56.957955       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:10:56.957970       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:10:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:10:57.163006       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:10:57.163043       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:10:57.163052       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:10:57.163682       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 10:11:27.163096       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 10:11:27.163361       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 10:11:27.163470       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 10:11:27.164086       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 10:11:28.763423       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:11:28.763460       1 metrics.go:72] Registering metrics
	I1123 10:11:28.763530       1 controller.go:711] "Syncing nftables rules"
	I1123 10:11:37.163517       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:11:37.163567       1 main.go:301] handling current node
	I1123 10:11:47.162625       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:11:47.162660       1 main.go:301] handling current node
	I1123 10:11:57.163287       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:11:57.163317       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fde673b61a03b720e2492ddf051014b494251142f14b9bcf92cb9b5416dc9304] <==
	I1123 10:10:55.657998       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 10:10:55.661587       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:10:55.671100       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 10:10:55.671173       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 10:10:55.671189       1 policy_source.go:240] refreshing policies
	I1123 10:10:55.671298       1 aggregator.go:171] initial CRD sync complete...
	I1123 10:10:55.671308       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 10:10:55.671314       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:10:55.671320       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:10:55.706174       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:10:55.712704       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 10:10:55.712771       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 10:10:55.712837       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1123 10:10:55.770618       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 10:10:55.981967       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:10:56.417682       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:10:57.123124       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:10:57.228667       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:10:57.267635       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:10:57.283734       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:10:57.362329       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.105.87"}
	I1123 10:10:57.381807       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.19.184"}
	I1123 10:10:59.467817       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:10:59.521093       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:10:59.811725       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e20d0c00b09f6363ed0697d9006ccbda9d1b29842f0e683983474b226e898361] <==
	I1123 10:10:59.218262       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:10:59.218596       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-020224"
	I1123 10:10:59.218729       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 10:10:59.236291       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:10:59.240418       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:10:59.260317       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:10:59.260595       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:10:59.260670       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:10:59.260727       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 10:10:59.260785       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:10:59.260861       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 10:10:59.263230       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 10:10:59.263349       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:10:59.264057       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:10:59.264318       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:10:59.264332       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 10:10:59.264344       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:10:59.271171       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:10:59.274568       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 10:10:59.281810       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:10:59.282527       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:10:59.284215       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:10:59.295016       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:10:59.842632       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1123 10:10:59.842824       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [68f6227f68631f93834013a157602cddcb5a711bae38e8f85120cd85c0718b34] <==
	I1123 10:10:57.180889       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:10:57.311266       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:10:57.413956       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:10:57.414008       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 10:10:57.414094       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:10:57.443332       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:10:57.443451       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:10:57.447699       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:10:57.448092       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:10:57.448311       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:10:57.449876       1 config.go:200] "Starting service config controller"
	I1123 10:10:57.449944       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:10:57.449987       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:10:57.450021       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:10:57.450068       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:10:57.450095       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:10:57.450716       1 config.go:309] "Starting node config controller"
	I1123 10:10:57.453315       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:10:57.453391       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:10:57.551138       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:10:57.551176       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:10:57.551220       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cc22d1a213207a0fdc2938062c1f1f5505506d20a750b9890fa2b63926bbbfa7] <==
	I1123 10:10:50.851094       1 serving.go:386] Generated self-signed cert in-memory
	W1123 10:10:55.509904       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 10:10:55.510012       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:10:55.510046       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 10:10:55.510087       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 10:10:55.646314       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:10:55.646345       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:10:55.661828       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:10:55.661933       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:10:55.661979       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:10:55.661995       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:10:55.762561       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:10:56 no-preload-020224 kubelet[781]: W1123 10:10:56.442847     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/crio-504da6166ae026de6e6cdacb49e0c316669e161270dbbfb4f0debc8077dce31b WatchSource:0}: Error finding container 504da6166ae026de6e6cdacb49e0c316669e161270dbbfb4f0debc8077dce31b: Status 404 returned error can't find the container with id 504da6166ae026de6e6cdacb49e0c316669e161270dbbfb4f0debc8077dce31b
	Nov 23 10:10:56 no-preload-020224 kubelet[781]: W1123 10:10:56.443406     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/18d5b0a18428445a305aec0729815f364f74be2f78b2db22b50b3f92ea1c69e0/crio-2ea1e4656660b9e009a8ab68c677aa129f9147c619dbc4c47c4d7b691d4d6a6e WatchSource:0}: Error finding container 2ea1e4656660b9e009a8ab68c677aa129f9147c619dbc4c47c4d7b691d4d6a6e: Status 404 returned error can't find the container with id 2ea1e4656660b9e009a8ab68c677aa129f9147c619dbc4c47c4d7b691d4d6a6e
	Nov 23 10:10:59 no-preload-020224 kubelet[781]: I1123 10:10:59.933809     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w9zm\" (UniqueName: \"kubernetes.io/projected/c3920bf6-1c4d-4052-b857-79560bb6954b-kube-api-access-8w9zm\") pod \"kubernetes-dashboard-855c9754f9-n54fr\" (UID: \"c3920bf6-1c4d-4052-b857-79560bb6954b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n54fr"
	Nov 23 10:10:59 no-preload-020224 kubelet[781]: I1123 10:10:59.934309     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a90ef1aa-01a6-46d1-bbcc-c09d2e529547-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-j5kdr\" (UID: \"a90ef1aa-01a6-46d1-bbcc-c09d2e529547\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr"
	Nov 23 10:10:59 no-preload-020224 kubelet[781]: I1123 10:10:59.934404     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c3920bf6-1c4d-4052-b857-79560bb6954b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-n54fr\" (UID: \"c3920bf6-1c4d-4052-b857-79560bb6954b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n54fr"
	Nov 23 10:10:59 no-preload-020224 kubelet[781]: I1123 10:10:59.934492     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8229w\" (UniqueName: \"kubernetes.io/projected/a90ef1aa-01a6-46d1-bbcc-c09d2e529547-kube-api-access-8229w\") pod \"dashboard-metrics-scraper-6ffb444bf9-j5kdr\" (UID: \"a90ef1aa-01a6-46d1-bbcc-c09d2e529547\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr"
	Nov 23 10:11:14 no-preload-020224 kubelet[781]: I1123 10:11:14.204763     781 scope.go:117] "RemoveContainer" containerID="05d633285dfade4a0cc3bdec255cf2a35aa20f7d6bced0dabc12c550722f49cc"
	Nov 23 10:11:14 no-preload-020224 kubelet[781]: I1123 10:11:14.230295     781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-n54fr" podStartSLOduration=8.524733359 podStartE2EDuration="15.230244893s" podCreationTimestamp="2025-11-23 10:10:59 +0000 UTC" firstStartedPulling="2025-11-23 10:11:00.29981658 +0000 UTC m=+12.606916894" lastFinishedPulling="2025-11-23 10:11:07.005328122 +0000 UTC m=+19.312428428" observedRunningTime="2025-11-23 10:11:07.19900725 +0000 UTC m=+19.506107556" watchObservedRunningTime="2025-11-23 10:11:14.230244893 +0000 UTC m=+26.537345198"
	Nov 23 10:11:15 no-preload-020224 kubelet[781]: I1123 10:11:15.208980     781 scope.go:117] "RemoveContainer" containerID="05d633285dfade4a0cc3bdec255cf2a35aa20f7d6bced0dabc12c550722f49cc"
	Nov 23 10:11:15 no-preload-020224 kubelet[781]: I1123 10:11:15.209311     781 scope.go:117] "RemoveContainer" containerID="413620f72045641113079d4a31f67a7e6fee16a80073eb3040877cd6c11292ae"
	Nov 23 10:11:15 no-preload-020224 kubelet[781]: E1123 10:11:15.210247     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j5kdr_kubernetes-dashboard(a90ef1aa-01a6-46d1-bbcc-c09d2e529547)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr" podUID="a90ef1aa-01a6-46d1-bbcc-c09d2e529547"
	Nov 23 10:11:16 no-preload-020224 kubelet[781]: I1123 10:11:16.212774     781 scope.go:117] "RemoveContainer" containerID="413620f72045641113079d4a31f67a7e6fee16a80073eb3040877cd6c11292ae"
	Nov 23 10:11:16 no-preload-020224 kubelet[781]: E1123 10:11:16.212973     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j5kdr_kubernetes-dashboard(a90ef1aa-01a6-46d1-bbcc-c09d2e529547)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr" podUID="a90ef1aa-01a6-46d1-bbcc-c09d2e529547"
	Nov 23 10:11:20 no-preload-020224 kubelet[781]: I1123 10:11:20.109516     781 scope.go:117] "RemoveContainer" containerID="413620f72045641113079d4a31f67a7e6fee16a80073eb3040877cd6c11292ae"
	Nov 23 10:11:20 no-preload-020224 kubelet[781]: E1123 10:11:20.110175     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j5kdr_kubernetes-dashboard(a90ef1aa-01a6-46d1-bbcc-c09d2e529547)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr" podUID="a90ef1aa-01a6-46d1-bbcc-c09d2e529547"
	Nov 23 10:11:27 no-preload-020224 kubelet[781]: I1123 10:11:27.242241     781 scope.go:117] "RemoveContainer" containerID="e17889e3bbe35de35cd8f26268b7a93a6ea26479b3e8840877416b118ac06f7c"
	Nov 23 10:11:34 no-preload-020224 kubelet[781]: I1123 10:11:34.909825     781 scope.go:117] "RemoveContainer" containerID="413620f72045641113079d4a31f67a7e6fee16a80073eb3040877cd6c11292ae"
	Nov 23 10:11:35 no-preload-020224 kubelet[781]: I1123 10:11:35.264723     781 scope.go:117] "RemoveContainer" containerID="413620f72045641113079d4a31f67a7e6fee16a80073eb3040877cd6c11292ae"
	Nov 23 10:11:35 no-preload-020224 kubelet[781]: I1123 10:11:35.265329     781 scope.go:117] "RemoveContainer" containerID="35353a7fdcee3b1c11f15830f6a44f93a5172d7e420261be0ec4df87bb885de5"
	Nov 23 10:11:35 no-preload-020224 kubelet[781]: E1123 10:11:35.265539     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j5kdr_kubernetes-dashboard(a90ef1aa-01a6-46d1-bbcc-c09d2e529547)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr" podUID="a90ef1aa-01a6-46d1-bbcc-c09d2e529547"
	Nov 23 10:11:40 no-preload-020224 kubelet[781]: I1123 10:11:40.112474     781 scope.go:117] "RemoveContainer" containerID="35353a7fdcee3b1c11f15830f6a44f93a5172d7e420261be0ec4df87bb885de5"
	Nov 23 10:11:40 no-preload-020224 kubelet[781]: E1123 10:11:40.112665     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-j5kdr_kubernetes-dashboard(a90ef1aa-01a6-46d1-bbcc-c09d2e529547)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-j5kdr" podUID="a90ef1aa-01a6-46d1-bbcc-c09d2e529547"
	Nov 23 10:11:53 no-preload-020224 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:11:53 no-preload-020224 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:11:53 no-preload-020224 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
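The kubelet entries above show the dashboard-metrics-scraper pod in CrashLoopBackOff, with the restart back-off growing from 10s to 20s between attempts; kubelet doubles this delay after each failed restart up to a cap (5 minutes in current releases). A toy sketch of that schedule, purely illustrative and not kubelet code:

    package main

    import (
    	"fmt"
    	"time"
    )

    // crashLoopDelays returns a kubelet-style restart back-off schedule:
    // start at `initial`, double after every failed restart, never exceed `max`.
    // Illustrative only; the real policy lives in kubelet's pod workers.
    func crashLoopDelays(initial, max time.Duration, attempts int) []time.Duration {
    	delays := make([]time.Duration, 0, attempts)
    	d := initial
    	for i := 0; i < attempts; i++ {
    		delays = append(delays, d)
    		d *= 2
    		if d > max {
    			d = max
    		}
    	}
    	return delays
    }

    func main() {
    	// Matches the 10s and 20s back-offs visible in the kubelet log above.
    	fmt.Println(crashLoopDelays(10*time.Second, 5*time.Minute, 6))
    	// [10s 20s 40s 1m20s 2m40s 5m0s]
    }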
	
	
	==> kubernetes-dashboard [a1167a40e0a499dbfedc0b42aed6734a35d1bc25e255a90cfffe0a0e7023eb30] <==
	2025/11/23 10:11:07 Starting overwatch
	2025/11/23 10:11:07 Using namespace: kubernetes-dashboard
	2025/11/23 10:11:07 Using in-cluster config to connect to apiserver
	2025/11/23 10:11:07 Using secret token for csrf signing
	2025/11/23 10:11:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 10:11:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 10:11:07 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 10:11:07 Generating JWE encryption key
	2025/11/23 10:11:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 10:11:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 10:11:07 Initializing JWE encryption key from synchronized object
	2025/11/23 10:11:07 Creating in-cluster Sidecar client
	2025/11/23 10:11:07 Serving insecurely on HTTP port: 9090
	2025/11/23 10:11:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:11:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [7f683b37fb2e222c6e33a53d3dd7bc514b2b5e218719c2446f59bd1db11e26f1] <==
	W1123 10:11:27.314030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:30.768779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:35.030726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:38.629286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:41.683355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:44.705538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:44.712683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:11:44.713058       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:11:44.713243       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-020224_fccf3123-3411-4345-ac68-e05ed271f5f5!
	I1123 10:11:44.714132       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"160d8384-48d9-41be-8c08-06b5acefeeea", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-020224_fccf3123-3411-4345-ac68-e05ed271f5f5 became leader
	W1123 10:11:44.719670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:44.725358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:11:44.814215       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-020224_fccf3123-3411-4345-ac68-e05ed271f5f5!
	W1123 10:11:46.728409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:46.735142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:48.739155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:48.743412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:50.747142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:50.754859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:52.758699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:52.765098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:54.769293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:54.776065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:56.781691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:11:56.790010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e17889e3bbe35de35cd8f26268b7a93a6ea26479b3e8840877416b118ac06f7c] <==
	I1123 10:10:57.034067       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 10:11:27.036210       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
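One of the two storage-provisioner instances in the dump above exited fatally on an i/o timeout against the in-cluster apiserver VIP at 10.96.0.1:443. A minimal reachability probe from inside the node, sketched here under the assumptions that the no-preload-020224 profile still exists, curl is present in the node image, and the service CIDR is the default 10.96.0.0/12:

	# Hypothetical probe of the kubernetes service VIP the provisioner timed out on
	out/minikube-linux-arm64 ssh -p no-preload-020224 -- curl -sk --max-time 5 https://10.96.0.1:443/version

A timeout here points at kube-proxy or CNI plumbing on the node rather than at the provisioner itself.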
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-020224 -n no-preload-020224
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-020224 -n no-preload-020224: exit status 2 (373.709979ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
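The harness treats exit status 2 from the status probe as possibly benign ("may be ok") even though the APIServer field prints Running. When triaging a pause failure like this one, dumping the full status separates the host, kubelet, and apiserver fields; a sketch, assuming the profile has not yet been deleted:

	# Hypothetical follow-up: show every status field instead of only APIServer
	out/minikube-linux-arm64 status -p no-preload-020224 --output json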
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-020224 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.48s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-566990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-566990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (288.096198ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:12:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
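The MK_ADDON_ENABLE_PAUSED error above is minikube's paused-state check: it shells into the node and runs sudo runc list -f json, which fails here because /run/runc does not exist at that moment. A sketch of reproducing the same probe by hand, assuming the embed-certs-566990 profile is still up and that crio is driving containers through runc:

	# Re-run the paused-state probe the addon enable path performs (hypothetical manual check)
	out/minikube-linux-arm64 ssh -p embed-certs-566990 -- sudo runc list -f json
	# crio's own view of running containers, which does not depend on /run/runc being populated
	out/minikube-linux-arm64 ssh -p embed-certs-566990 -- sudo crictl ps --state running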
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-566990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-566990 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-566990 describe deploy/metrics-server -n kube-system: exit status 1 (110.122179ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-566990 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
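Had the enable step succeeded, the image assertion above reduces to reading the container image off the metrics-server deployment. A quick spot check using standard kubectl jsonpath output, assuming the deployment exists (it is absent in this failed run):

	# Print just the image the metrics-server addon deployed (hypothetical spot check)
	kubectl --context embed-certs-566990 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# Expected to contain: fake.domain/registry.k8s.io/echoserver:1.4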
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-566990
helpers_test.go:243: (dbg) docker inspect embed-certs-566990:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086",
	        "Created": "2025-11-23T10:10:53.870240419Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 517245,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:10:53.933155511Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/hosts",
	        "LogPath": "/var/lib/docker/containers/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086-json.log",
	        "Name": "/embed-certs-566990",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-566990:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-566990",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086",
	                "LowerDir": "/var/lib/docker/overlay2/574f481259594912a40868acf264102260539315df15d075ad880cdeae35844b-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/574f481259594912a40868acf264102260539315df15d075ad880cdeae35844b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/574f481259594912a40868acf264102260539315df15d075ad880cdeae35844b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/574f481259594912a40868acf264102260539315df15d075ad880cdeae35844b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-566990",
	                "Source": "/var/lib/docker/volumes/embed-certs-566990/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-566990",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-566990",
	                "name.minikube.sigs.k8s.io": "embed-certs-566990",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "83c69f81fe112af79e367cd16f4b1b7ef2b6acaa2882ccdeffcb05acd20e773f",
	            "SandboxKey": "/var/run/docker/netns/83c69f81fe11",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33484"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-566990": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:d1:48:84:c6:44",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d564915410215420da3cf47698d0501dfe2d9ab80cfbf8100f70d4be821f6796",
	                    "EndpointID": "c48b192543f6870b4bf017dcfdedd4a76098db511f70653ebd4a5ed1aafe1916",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-566990",
	                        "8f6ca1334711"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
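Most of what the post-mortem needs from the inspect dump above is the published API-server port. It can be pulled directly with a Go-template filter instead of scanning the full JSON; a sketch, assuming the container is still running:

	# Hypothetical narrowing of docker inspect to the 8443/tcp host mapping
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-566990
	# From the dump above this resolves to 33484 on 127.0.0.1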
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-566990 -n embed-certs-566990
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-566990 logs -n 25
E1123 10:12:20.372276  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/custom-flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-566990 logs -n 25: (1.575016359s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p calico-507563 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ calico-507563                │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ calico-507563                │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ ssh     │ -p calico-507563 sudo crio config                                                                                                                                                                                                             │ calico-507563                │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:08 UTC │
	│ delete  │ -p calico-507563                                                                                                                                                                                                                              │ calico-507563                │ jenkins │ v1.37.0 │ 23 Nov 25 10:08 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-706028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │                     │
	│ stop    │ -p old-k8s-version-706028 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-706028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p old-k8s-version-706028 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-020224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ stop    │ -p no-preload-020224 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ image   │ old-k8s-version-706028 image list --format=json                                                                                                                                                                                               │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ pause   │ -p old-k8s-version-706028 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-020224 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:11 UTC │
	│ delete  │ -p old-k8s-version-706028                                                                                                                                                                                                                     │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ delete  │ -p old-k8s-version-706028                                                                                                                                                                                                                     │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:12 UTC │
	│ image   │ no-preload-020224 image list --format=json                                                                                                                                                                                                    │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │ 23 Nov 25 10:11 UTC │
	│ pause   │ -p no-preload-020224 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │                     │
	│ delete  │ -p no-preload-020224                                                                                                                                                                                                                          │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │ 23 Nov 25 10:12 UTC │
	│ delete  │ -p no-preload-020224                                                                                                                                                                                                                          │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ delete  │ -p disable-driver-mounts-097888                                                                                                                                                                                                               │ disable-driver-mounts-097888 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-566990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:12:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:12:02.767648  521335 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:12:02.767838  521335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:12:02.767868  521335 out.go:374] Setting ErrFile to fd 2...
	I1123 10:12:02.767887  521335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:12:02.768671  521335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:12:02.769150  521335 out.go:368] Setting JSON to false
	I1123 10:12:02.770179  521335 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10472,"bootTime":1763882251,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:12:02.770250  521335 start.go:143] virtualization:  
	I1123 10:12:02.774265  521335 out.go:179] * [default-k8s-diff-port-330197] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:12:02.778279  521335 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 10:12:02.778410  521335 notify.go:221] Checking for updates...
	I1123 10:12:02.784632  521335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:12:02.787716  521335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:12:02.790767  521335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 10:12:02.793667  521335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:12:02.796652  521335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:12:02.800196  521335 config.go:182] Loaded profile config "embed-certs-566990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:12:02.800296  521335 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:12:02.824228  521335 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:12:02.824356  521335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:12:02.881013  521335 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:12:02.871785287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:12:02.881179  521335 docker.go:319] overlay module found
	I1123 10:12:02.884294  521335 out.go:179] * Using the docker driver based on user configuration
	I1123 10:12:02.887236  521335 start.go:309] selected driver: docker
	I1123 10:12:02.887258  521335 start.go:927] validating driver "docker" against <nil>
	I1123 10:12:02.887272  521335 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:12:02.888014  521335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:12:02.960269  521335 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:12:02.951664335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:12:02.960427  521335 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:12:02.960650  521335 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:12:02.963579  521335 out.go:179] * Using Docker driver with root privileges
	I1123 10:12:02.966479  521335 cni.go:84] Creating CNI manager for ""
	I1123 10:12:02.966578  521335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:12:02.966595  521335 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:12:02.966683  521335 start.go:353] cluster config:
	{Name:default-k8s-diff-port-330197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-330197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:12:02.969807  521335 out.go:179] * Starting "default-k8s-diff-port-330197" primary control-plane node in "default-k8s-diff-port-330197" cluster
	I1123 10:12:02.972643  521335 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:12:02.975612  521335 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:12:02.978497  521335 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:12:02.978542  521335 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 10:12:02.978553  521335 cache.go:65] Caching tarball of preloaded images
	I1123 10:12:02.978574  521335 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:12:02.978635  521335 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 10:12:02.978651  521335 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:12:02.978753  521335 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/config.json ...
	I1123 10:12:02.978773  521335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/config.json: {Name:mk367af7b0a65a94d499609cca4159c0f5d20ff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:02.996843  521335 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:12:02.996869  521335 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:12:02.996889  521335 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:12:02.996920  521335 start.go:360] acquireMachinesLock for default-k8s-diff-port-330197: {Name:mke95bbd84696d9268c86469759951e95b68110b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:12:02.997036  521335 start.go:364] duration metric: took 94.623µs to acquireMachinesLock for "default-k8s-diff-port-330197"
	I1123 10:12:02.997069  521335 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-330197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-330197 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:12:02.997140  521335 start.go:125] createHost starting for "" (driver="docker")
	W1123 10:12:04.418660  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	W1123 10:12:06.914930  516347 node_ready.go:57] node "embed-certs-566990" has "Ready":"False" status (will retry)
	I1123 10:12:03.000653  521335 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 10:12:03.000892  521335 start.go:159] libmachine.API.Create for "default-k8s-diff-port-330197" (driver="docker")
	I1123 10:12:03.000939  521335 client.go:173] LocalClient.Create starting
	I1123 10:12:03.001037  521335 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem
	I1123 10:12:03.001079  521335 main.go:143] libmachine: Decoding PEM data...
	I1123 10:12:03.001096  521335 main.go:143] libmachine: Parsing certificate...
	I1123 10:12:03.001164  521335 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem
	I1123 10:12:03.001193  521335 main.go:143] libmachine: Decoding PEM data...
	I1123 10:12:03.001209  521335 main.go:143] libmachine: Parsing certificate...
	I1123 10:12:03.001700  521335 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-330197 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:12:03.018916  521335 cli_runner.go:211] docker network inspect default-k8s-diff-port-330197 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:12:03.019088  521335 network_create.go:284] running [docker network inspect default-k8s-diff-port-330197] to gather additional debugging logs...
	I1123 10:12:03.019108  521335 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-330197
	W1123 10:12:03.041282  521335 cli_runner.go:211] docker network inspect default-k8s-diff-port-330197 returned with exit code 1
	I1123 10:12:03.041335  521335 network_create.go:287] error running [docker network inspect default-k8s-diff-port-330197]: docker network inspect default-k8s-diff-port-330197: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-330197 not found
	I1123 10:12:03.041368  521335 network_create.go:289] output of [docker network inspect default-k8s-diff-port-330197]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-330197 not found
	
	** /stderr **
	I1123 10:12:03.041594  521335 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:12:03.058512  521335 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d56166f18c3a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0e:f2:0f:1a:18:9c} reservation:<nil>}
	I1123 10:12:03.058866  521335 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fe6f7fd59576 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:8b:f7:8e:2b:59} reservation:<nil>}
	I1123 10:12:03.059113  521335 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c262e08021b1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:16:63:f0:32:b6} reservation:<nil>}
	I1123 10:12:03.059448  521335 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d56491541021 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:fa:0b:84:51:6e:be} reservation:<nil>}
	I1123 10:12:03.059895  521335 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a160b0}
	I1123 10:12:03.059920  521335 network_create.go:124] attempt to create docker network default-k8s-diff-port-330197 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 10:12:03.059975  521335 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-330197 default-k8s-diff-port-330197
	I1123 10:12:03.124643  521335 network_create.go:108] docker network default-k8s-diff-port-330197 192.168.85.0/24 created
	I1123 10:12:03.124679  521335 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-330197" container
	I1123 10:12:03.124749  521335 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:12:03.142553  521335 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-330197 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-330197 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:12:03.161206  521335 oci.go:103] Successfully created a docker volume default-k8s-diff-port-330197
	I1123 10:12:03.161315  521335 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-330197-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-330197 --entrypoint /usr/bin/test -v default-k8s-diff-port-330197:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:12:03.722459  521335 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-330197
	I1123 10:12:03.722537  521335 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:12:03.722562  521335 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:12:03.722680  521335 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-330197:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 10:12:07.915289  516347 node_ready.go:49] node "embed-certs-566990" is "Ready"
	I1123 10:12:07.915322  516347 node_ready.go:38] duration metric: took 41.003932928s for node "embed-certs-566990" to be "Ready" ...
	I1123 10:12:07.915337  516347 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:12:07.915397  516347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:12:07.927352  516347 api_server.go:72] duration metric: took 42.182557597s to wait for apiserver process to appear ...
	I1123 10:12:07.927379  516347 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:12:07.927403  516347 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:12:07.936853  516347 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:12:07.937993  516347 api_server.go:141] control plane version: v1.34.1
	I1123 10:12:07.938020  516347 api_server.go:131] duration metric: took 10.633668ms to wait for apiserver health ...
	I1123 10:12:07.938030  516347 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:12:07.941148  516347 system_pods.go:59] 8 kube-system pods found
	I1123 10:12:07.941186  516347 system_pods.go:61] "coredns-66bc5c9577-d8sh7" [737943ee-552c-4a07-aa55-978b687c5b59] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:12:07.941194  516347 system_pods.go:61] "etcd-embed-certs-566990" [020c43bd-55e0-40c2-8119-1370611def91] Running
	I1123 10:12:07.941199  516347 system_pods.go:61] "kindnet-p6kh4" [0047dc25-c013-471f-89b6-22e1399e2dc9] Running
	I1123 10:12:07.941204  516347 system_pods.go:61] "kube-apiserver-embed-certs-566990" [cdc3c57e-a09e-45f0-85f3-865174df4118] Running
	I1123 10:12:07.941208  516347 system_pods.go:61] "kube-controller-manager-embed-certs-566990" [8057de0a-12ee-4d41-8535-b1b4db1c022e] Running
	I1123 10:12:07.941212  516347 system_pods.go:61] "kube-proxy-k4lvf" [88d44863-5a0e-44f5-9806-2e6e769dc05b] Running
	I1123 10:12:07.941215  516347 system_pods.go:61] "kube-scheduler-embed-certs-566990" [63997f1e-1056-4acd-a564-a8fddff7356f] Running
	I1123 10:12:07.941222  516347 system_pods.go:61] "storage-provisioner" [9f1e25da-6804-44f0-aa70-5ff52015cd12] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:12:07.941233  516347 system_pods.go:74] duration metric: took 3.196639ms to wait for pod list to return data ...
	I1123 10:12:07.941246  516347 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:12:07.943882  516347 default_sa.go:45] found service account: "default"
	I1123 10:12:07.943906  516347 default_sa.go:55] duration metric: took 2.653328ms for default service account to be created ...
	I1123 10:12:07.943916  516347 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:12:07.946852  516347 system_pods.go:86] 8 kube-system pods found
	I1123 10:12:07.946885  516347 system_pods.go:89] "coredns-66bc5c9577-d8sh7" [737943ee-552c-4a07-aa55-978b687c5b59] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:12:07.946892  516347 system_pods.go:89] "etcd-embed-certs-566990" [020c43bd-55e0-40c2-8119-1370611def91] Running
	I1123 10:12:07.946898  516347 system_pods.go:89] "kindnet-p6kh4" [0047dc25-c013-471f-89b6-22e1399e2dc9] Running
	I1123 10:12:07.946902  516347 system_pods.go:89] "kube-apiserver-embed-certs-566990" [cdc3c57e-a09e-45f0-85f3-865174df4118] Running
	I1123 10:12:07.946907  516347 system_pods.go:89] "kube-controller-manager-embed-certs-566990" [8057de0a-12ee-4d41-8535-b1b4db1c022e] Running
	I1123 10:12:07.946911  516347 system_pods.go:89] "kube-proxy-k4lvf" [88d44863-5a0e-44f5-9806-2e6e769dc05b] Running
	I1123 10:12:07.946915  516347 system_pods.go:89] "kube-scheduler-embed-certs-566990" [63997f1e-1056-4acd-a564-a8fddff7356f] Running
	I1123 10:12:07.946921  516347 system_pods.go:89] "storage-provisioner" [9f1e25da-6804-44f0-aa70-5ff52015cd12] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:12:07.946941  516347 retry.go:31] will retry after 288.298624ms: missing components: kube-dns
	I1123 10:12:08.252494  516347 system_pods.go:86] 8 kube-system pods found
	I1123 10:12:08.252544  516347 system_pods.go:89] "coredns-66bc5c9577-d8sh7" [737943ee-552c-4a07-aa55-978b687c5b59] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:12:08.252551  516347 system_pods.go:89] "etcd-embed-certs-566990" [020c43bd-55e0-40c2-8119-1370611def91] Running
	I1123 10:12:08.252558  516347 system_pods.go:89] "kindnet-p6kh4" [0047dc25-c013-471f-89b6-22e1399e2dc9] Running
	I1123 10:12:08.252562  516347 system_pods.go:89] "kube-apiserver-embed-certs-566990" [cdc3c57e-a09e-45f0-85f3-865174df4118] Running
	I1123 10:12:08.252567  516347 system_pods.go:89] "kube-controller-manager-embed-certs-566990" [8057de0a-12ee-4d41-8535-b1b4db1c022e] Running
	I1123 10:12:08.252571  516347 system_pods.go:89] "kube-proxy-k4lvf" [88d44863-5a0e-44f5-9806-2e6e769dc05b] Running
	I1123 10:12:08.252575  516347 system_pods.go:89] "kube-scheduler-embed-certs-566990" [63997f1e-1056-4acd-a564-a8fddff7356f] Running
	I1123 10:12:08.252582  516347 system_pods.go:89] "storage-provisioner" [9f1e25da-6804-44f0-aa70-5ff52015cd12] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:12:08.252599  516347 retry.go:31] will retry after 259.741032ms: missing components: kube-dns
	I1123 10:12:08.516443  516347 system_pods.go:86] 8 kube-system pods found
	I1123 10:12:08.516485  516347 system_pods.go:89] "coredns-66bc5c9577-d8sh7" [737943ee-552c-4a07-aa55-978b687c5b59] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:12:08.516493  516347 system_pods.go:89] "etcd-embed-certs-566990" [020c43bd-55e0-40c2-8119-1370611def91] Running
	I1123 10:12:08.516499  516347 system_pods.go:89] "kindnet-p6kh4" [0047dc25-c013-471f-89b6-22e1399e2dc9] Running
	I1123 10:12:08.516503  516347 system_pods.go:89] "kube-apiserver-embed-certs-566990" [cdc3c57e-a09e-45f0-85f3-865174df4118] Running
	I1123 10:12:08.516508  516347 system_pods.go:89] "kube-controller-manager-embed-certs-566990" [8057de0a-12ee-4d41-8535-b1b4db1c022e] Running
	I1123 10:12:08.516512  516347 system_pods.go:89] "kube-proxy-k4lvf" [88d44863-5a0e-44f5-9806-2e6e769dc05b] Running
	I1123 10:12:08.516516  516347 system_pods.go:89] "kube-scheduler-embed-certs-566990" [63997f1e-1056-4acd-a564-a8fddff7356f] Running
	I1123 10:12:08.516522  516347 system_pods.go:89] "storage-provisioner" [9f1e25da-6804-44f0-aa70-5ff52015cd12] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:12:08.516536  516347 retry.go:31] will retry after 365.286953ms: missing components: kube-dns
	I1123 10:12:08.885920  516347 system_pods.go:86] 8 kube-system pods found
	I1123 10:12:08.885955  516347 system_pods.go:89] "coredns-66bc5c9577-d8sh7" [737943ee-552c-4a07-aa55-978b687c5b59] Running
	I1123 10:12:08.885962  516347 system_pods.go:89] "etcd-embed-certs-566990" [020c43bd-55e0-40c2-8119-1370611def91] Running
	I1123 10:12:08.885967  516347 system_pods.go:89] "kindnet-p6kh4" [0047dc25-c013-471f-89b6-22e1399e2dc9] Running
	I1123 10:12:08.885972  516347 system_pods.go:89] "kube-apiserver-embed-certs-566990" [cdc3c57e-a09e-45f0-85f3-865174df4118] Running
	I1123 10:12:08.885978  516347 system_pods.go:89] "kube-controller-manager-embed-certs-566990" [8057de0a-12ee-4d41-8535-b1b4db1c022e] Running
	I1123 10:12:08.885987  516347 system_pods.go:89] "kube-proxy-k4lvf" [88d44863-5a0e-44f5-9806-2e6e769dc05b] Running
	I1123 10:12:08.885992  516347 system_pods.go:89] "kube-scheduler-embed-certs-566990" [63997f1e-1056-4acd-a564-a8fddff7356f] Running
	I1123 10:12:08.886001  516347 system_pods.go:89] "storage-provisioner" [9f1e25da-6804-44f0-aa70-5ff52015cd12] Running
	I1123 10:12:08.886010  516347 system_pods.go:126] duration metric: took 942.088099ms to wait for k8s-apps to be running ...
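	The wait loop above repeatedly lists the kube-system pods and retries after a short delay until no component (kube-dns in this run) is still Pending. A rough client-go sketch of that polling pattern, assuming a kubeconfig already points at the cluster:

```go
// waitpods.go: poll kube-system pods until all report phase Running, retrying
// with a short delay, similar to the "will retry after ..." loop in the log.
// Assumes ~/.kube/config already points at the cluster.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)
	for {
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			fmt.Println("list failed, retrying:", err)
			time.Sleep(time.Second)
			continue
		}
		notRunning := 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				notRunning++
			}
		}
		if notRunning == 0 {
			fmt.Printf("all %d kube-system pods are Running\n", len(pods.Items))
			return
		}
		fmt.Printf("%d pod(s) not yet Running, retrying\n", notRunning)
		time.Sleep(300 * time.Millisecond)
	}
}
```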
	I1123 10:12:08.886024  516347 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:12:08.886077  516347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:12:08.904185  516347 system_svc.go:56] duration metric: took 18.151864ms WaitForService to wait for kubelet
	I1123 10:12:08.904224  516347 kubeadm.go:587] duration metric: took 43.15943502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:12:08.904253  516347 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:12:08.916542  516347 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:12:08.916580  516347 node_conditions.go:123] node cpu capacity is 2
	I1123 10:12:08.916594  516347 node_conditions.go:105] duration metric: took 12.335456ms to run NodePressure ...
	I1123 10:12:08.916608  516347 start.go:242] waiting for startup goroutines ...
	I1123 10:12:08.916616  516347 start.go:247] waiting for cluster config update ...
	I1123 10:12:08.916631  516347 start.go:256] writing updated cluster config ...
	I1123 10:12:08.917172  516347 ssh_runner.go:195] Run: rm -f paused
	I1123 10:12:08.922630  516347 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:12:08.927412  516347 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d8sh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:12:08.935453  516347 pod_ready.go:94] pod "coredns-66bc5c9577-d8sh7" is "Ready"
	I1123 10:12:08.935482  516347 pod_ready.go:86] duration metric: took 8.04028ms for pod "coredns-66bc5c9577-d8sh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:12:08.939724  516347 pod_ready.go:83] waiting for pod "etcd-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:12:08.959517  516347 pod_ready.go:94] pod "etcd-embed-certs-566990" is "Ready"
	I1123 10:12:08.959547  516347 pod_ready.go:86] duration metric: took 19.793171ms for pod "etcd-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:12:08.970675  516347 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:12:08.992189  516347 pod_ready.go:94] pod "kube-apiserver-embed-certs-566990" is "Ready"
	I1123 10:12:08.992221  516347 pod_ready.go:86] duration metric: took 21.513955ms for pod "kube-apiserver-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:12:09.010672  516347 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:12:09.328269  516347 pod_ready.go:94] pod "kube-controller-manager-embed-certs-566990" is "Ready"
	I1123 10:12:09.328293  516347 pod_ready.go:86] duration metric: took 317.595095ms for pod "kube-controller-manager-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:12:09.528595  516347 pod_ready.go:83] waiting for pod "kube-proxy-k4lvf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:12:09.928586  516347 pod_ready.go:94] pod "kube-proxy-k4lvf" is "Ready"
	I1123 10:12:09.928616  516347 pod_ready.go:86] duration metric: took 399.998334ms for pod "kube-proxy-k4lvf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:12:10.128150  516347 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:12:10.527972  516347 pod_ready.go:94] pod "kube-scheduler-embed-certs-566990" is "Ready"
	I1123 10:12:10.528003  516347 pod_ready.go:86] duration metric: took 399.789467ms for pod "kube-scheduler-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:12:10.528017  516347 pod_ready.go:40] duration metric: took 1.605351541s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
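	The "extra waiting" step selects pods by component label (k8s-app=kube-dns, component=etcd, and so on) and checks each pod's Ready condition rather than just its phase. A small client-go sketch of that per-pod readiness check, again assuming a working kubeconfig; the label selector used here is one of those listed in the log:

```go
// podready.go: report whether every pod matching a component label has the Ready
// condition set to True, like the per-pod "extra waiting" step in the log.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady returns true when the pod carries a Ready condition with status True.
func isReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, isReady(p))
	}
}
```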
	I1123 10:12:10.587007  516347 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:12:10.590392  516347 out.go:179] * Done! kubectl is now configured to use "embed-certs-566990" cluster and "default" namespace by default
	I1123 10:12:08.129645  521335 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-330197:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.406902509s)
	I1123 10:12:08.129687  521335 kic.go:203] duration metric: took 4.407124823s to extract preloaded images to volume ...
	W1123 10:12:08.129821  521335 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 10:12:08.130006  521335 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:12:08.238571  521335 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-330197 --name default-k8s-diff-port-330197 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-330197 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-330197 --network default-k8s-diff-port-330197 --ip 192.168.85.2 --volume default-k8s-diff-port-330197:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:12:08.569978  521335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Running}}
	I1123 10:12:08.597512  521335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:12:08.623408  521335 cli_runner.go:164] Run: docker exec default-k8s-diff-port-330197 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:12:08.681355  521335 oci.go:144] the created container "default-k8s-diff-port-330197" has a running status.
	I1123 10:12:08.681383  521335 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa...
	I1123 10:12:09.457461  521335 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:12:09.478404  521335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:12:09.495416  521335 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:12:09.495440  521335 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-330197 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 10:12:09.543756  521335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:12:09.564243  521335 machine.go:94] provisionDockerMachine start ...
	I1123 10:12:09.564348  521335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:12:09.582999  521335 main.go:143] libmachine: Using SSH client type: native
	I1123 10:12:09.583360  521335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33486 <nil> <nil>}
	I1123 10:12:09.583376  521335 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:12:09.584017  521335 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 10:12:12.737059  521335 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-330197
	
	I1123 10:12:12.737086  521335 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-330197"
	I1123 10:12:12.737160  521335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:12:12.754942  521335 main.go:143] libmachine: Using SSH client type: native
	I1123 10:12:12.755288  521335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33486 <nil> <nil>}
	I1123 10:12:12.755305  521335 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-330197 && echo "default-k8s-diff-port-330197" | sudo tee /etc/hostname
	I1123 10:12:12.922638  521335 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-330197
	
	I1123 10:12:12.922751  521335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:12:12.942674  521335 main.go:143] libmachine: Using SSH client type: native
	I1123 10:12:12.942994  521335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33486 <nil> <nil>}
	I1123 10:12:12.943011  521335 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-330197' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-330197/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-330197' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:12:13.110257  521335 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:12:13.110286  521335 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 10:12:13.110331  521335 ubuntu.go:190] setting up certificates
	I1123 10:12:13.110348  521335 provision.go:84] configureAuth start
	I1123 10:12:13.110427  521335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-330197
	I1123 10:12:13.130588  521335 provision.go:143] copyHostCerts
	I1123 10:12:13.130653  521335 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 10:12:13.130663  521335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 10:12:13.130733  521335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 10:12:13.130822  521335 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 10:12:13.130832  521335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 10:12:13.130859  521335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 10:12:13.130922  521335 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 10:12:13.130947  521335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 10:12:13.132598  521335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 10:12:13.132690  521335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-330197 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-330197 localhost minikube]
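	The server certificate generated here carries the SAN list shown above (loopback, the node IP, the profile name, localhost and minikube). A minimal crypto/x509 sketch that produces a certificate with those SANs; it is self-signed for brevity, whereas minikube signs the server certificate with its CA key pair (ca.pem/ca-key.pem):

```go
// servercert.go: generate a key pair and a certificate carrying the SANs shown
// in the log. Self-signed here to keep the sketch short; the real flow signs
// with the minikube CA instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-330197"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-330197", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```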
	I1123 10:12:13.467218  521335 provision.go:177] copyRemoteCerts
	I1123 10:12:13.467313  521335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:12:13.467378  521335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:12:13.485943  521335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33486 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:12:13.593636  521335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:12:13.612581  521335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 10:12:13.630902  521335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 10:12:13.651531  521335 provision.go:87] duration metric: took 541.155217ms to configureAuth
	I1123 10:12:13.651625  521335 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:12:13.651897  521335 config.go:182] Loaded profile config "default-k8s-diff-port-330197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:12:13.652038  521335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:12:13.669166  521335 main.go:143] libmachine: Using SSH client type: native
	I1123 10:12:13.669664  521335 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33486 <nil> <nil>}
	I1123 10:12:13.669684  521335 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:12:14.029691  521335 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:12:14.029717  521335 machine.go:97] duration metric: took 4.465451501s to provisionDockerMachine
	I1123 10:12:14.029728  521335 client.go:176] duration metric: took 11.028778497s to LocalClient.Create
	I1123 10:12:14.029742  521335 start.go:167] duration metric: took 11.028850752s to libmachine.API.Create "default-k8s-diff-port-330197"
	I1123 10:12:14.029750  521335 start.go:293] postStartSetup for "default-k8s-diff-port-330197" (driver="docker")
	I1123 10:12:14.029760  521335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:12:14.029837  521335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:12:14.029894  521335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:12:14.048601  521335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33486 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:12:14.158196  521335 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:12:14.161587  521335 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:12:14.161622  521335 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:12:14.161651  521335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 10:12:14.161707  521335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 10:12:14.161841  521335 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 10:12:14.161957  521335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:12:14.169798  521335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:12:14.188126  521335 start.go:296] duration metric: took 158.353532ms for postStartSetup
	I1123 10:12:14.188518  521335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-330197
	I1123 10:12:14.205502  521335 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/config.json ...
	I1123 10:12:14.205814  521335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:12:14.205859  521335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:12:14.225870  521335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33486 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:12:14.330691  521335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:12:14.335699  521335 start.go:128] duration metric: took 11.338544967s to createHost
	I1123 10:12:14.335727  521335 start.go:83] releasing machines lock for "default-k8s-diff-port-330197", held for 11.338674954s
	I1123 10:12:14.335797  521335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-330197
	I1123 10:12:14.363969  521335 ssh_runner.go:195] Run: cat /version.json
	I1123 10:12:14.364010  521335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:12:14.364024  521335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:12:14.364071  521335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:12:14.392482  521335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33486 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:12:14.394364  521335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33486 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:12:14.597774  521335 ssh_runner.go:195] Run: systemctl --version
	I1123 10:12:14.604169  521335 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:12:14.641314  521335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:12:14.646701  521335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:12:14.646774  521335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:12:14.678421  521335 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
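	Disabling the bridge and podman CNI configs is done by renaming any matching files under /etc/cni/net.d with a ".mk_disabled" suffix, as the find/mv pipeline above shows. A small Go sketch with the same effect:

```go
// disablecni.go: rename bridge/podman CNI configs out of the way by appending
// ".mk_disabled", mirroring the find/mv pipeline in the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}
```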
	I1123 10:12:14.678447  521335 start.go:496] detecting cgroup driver to use...
	I1123 10:12:14.678505  521335 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:12:14.678570  521335 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:12:14.698532  521335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:12:14.711655  521335 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:12:14.711744  521335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:12:14.731420  521335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:12:14.750559  521335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:12:14.872453  521335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:12:15.006339  521335 docker.go:234] disabling docker service ...
	I1123 10:12:15.006451  521335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:12:15.045215  521335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:12:15.062932  521335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:12:15.193690  521335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:12:15.309296  521335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:12:15.323905  521335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:12:15.338980  521335 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:12:15.339107  521335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:15.347998  521335 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:12:15.348124  521335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:15.357317  521335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:15.366537  521335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:15.375913  521335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:12:15.389659  521335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:15.399769  521335 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:15.414617  521335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
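	The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, reset conmon_cgroup, and open unprivileged ports via default_sysctls. A rough Go equivalent of the first substitution, run against a local copy of the file (path and value taken from the log):

```go
// crioconf.go: rough equivalent of the first sed above, rewriting the pause_image
// line of a CRI-O drop-in config. Operates on a local copy named 02-crio.conf.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "02-crio.conf" // local copy of /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", path)
}
```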
	I1123 10:12:15.424502  521335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:12:15.432559  521335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:12:15.440061  521335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:12:15.560392  521335 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:12:15.730633  521335 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:12:15.730781  521335 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:12:15.735309  521335 start.go:564] Will wait 60s for crictl version
	I1123 10:12:15.735444  521335 ssh_runner.go:195] Run: which crictl
	I1123 10:12:15.739096  521335 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:12:15.766278  521335 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
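	After CRI-O is restarted, the start code waits up to 60s for the runtime socket to reappear before calling crictl. A sketch of that wait, using the socket path from the log:

```go
// waitsock.go: wait up to 60s for the CRI-O socket to appear after a restart,
// mirroring the "Will wait 60s for socket path" step in the log.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(sock); err == nil {
			fmt.Println("socket is ready:", sock)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
	os.Exit(1)
}
```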
	I1123 10:12:15.766411  521335 ssh_runner.go:195] Run: crio --version
	I1123 10:12:15.798251  521335 ssh_runner.go:195] Run: crio --version
	I1123 10:12:15.832831  521335 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:12:15.835916  521335 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-330197 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:12:15.852584  521335 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 10:12:15.856455  521335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:12:15.866697  521335 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-330197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-330197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:12:15.866826  521335 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:12:15.866878  521335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:12:15.899067  521335 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:12:15.899092  521335 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:12:15.899150  521335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:12:15.924509  521335 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:12:15.924534  521335 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:12:15.924543  521335 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1123 10:12:15.924631  521335 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-330197 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-330197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:12:15.924726  521335 ssh_runner.go:195] Run: crio config
	I1123 10:12:15.989288  521335 cni.go:84] Creating CNI manager for ""
	I1123 10:12:15.989311  521335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:12:15.989347  521335 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:12:15.989378  521335 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-330197 NodeName:default-k8s-diff-port-330197 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:12:15.989567  521335 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-330197"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:12:15.989662  521335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:12:15.998451  521335 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:12:15.998522  521335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:12:16.007264  521335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1123 10:12:16.022493  521335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:12:16.037689  521335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1123 10:12:16.052896  521335 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:12:16.057262  521335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
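	The bash one-liner above rewrites /etc/hosts by filtering out any stale control-plane.minikube.internal entry and appending the current mapping. A small Go sketch of the same idea, writing to a scratch file (hosts.new is a stand-in; the real command copies the result back over /etc/hosts with sudo):

```go
// hostsentry.go: drop any stale "control-plane.minikube.internal" line and
// append the current mapping, the same idea as the bash one-liner in the log.
// Writes to hosts.new rather than /etc/hosts so it can run unprivileged.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.2\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry, "")
	if err := os.WriteFile("hosts.new", []byte(strings.Join(kept, "\n")), 0o644); err != nil {
		panic(err)
	}
}
```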
	I1123 10:12:16.067762  521335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:12:16.208252  521335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:12:16.227500  521335 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197 for IP: 192.168.85.2
	I1123 10:12:16.227525  521335 certs.go:195] generating shared ca certs ...
	I1123 10:12:16.227541  521335 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:16.227680  521335 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:12:16.227727  521335 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:12:16.227739  521335 certs.go:257] generating profile certs ...
	I1123 10:12:16.227795  521335 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/client.key
	I1123 10:12:16.227814  521335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/client.crt with IP's: []
	I1123 10:12:16.412892  521335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/client.crt ...
	I1123 10:12:16.412923  521335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/client.crt: {Name:mk6ee7db5250737b14520e0bea3079b97aa15138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:16.413119  521335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/client.key ...
	I1123 10:12:16.413137  521335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/client.key: {Name:mk36e62ce40cc4789021b11124dead36c53ba6da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:16.413236  521335 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/apiserver.key.d6400e66
	I1123 10:12:16.413255  521335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/apiserver.crt.d6400e66 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 10:12:16.534278  521335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/apiserver.crt.d6400e66 ...
	I1123 10:12:16.534323  521335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/apiserver.crt.d6400e66: {Name:mk6c93a1de3543306621cb84bb0b4e22c0fe5d9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:16.534523  521335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/apiserver.key.d6400e66 ...
	I1123 10:12:16.534541  521335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/apiserver.key.d6400e66: {Name:mk7f9dff2a92d36a93f2097625cf54786b76bd32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:16.534623  521335 certs.go:382] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/apiserver.crt.d6400e66 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/apiserver.crt
	I1123 10:12:16.534713  521335 certs.go:386] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/apiserver.key.d6400e66 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/apiserver.key
	I1123 10:12:16.534794  521335 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/proxy-client.key
	I1123 10:12:16.534814  521335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/proxy-client.crt with IP's: []
	I1123 10:12:16.785215  521335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/proxy-client.crt ...
	I1123 10:12:16.785246  521335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/proxy-client.crt: {Name:mk3e0371e5d369ba4cf9896fb345e539b29b51d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:16.785445  521335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/proxy-client.key ...
	I1123 10:12:16.785460  521335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/proxy-client.key: {Name:mk9481834fb793cf57c36b6770a2cb24bc96248a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:16.785664  521335 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:12:16.785713  521335 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:12:16.785725  521335 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:12:16.785757  521335 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:12:16.785786  521335 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:12:16.785813  521335 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:12:16.785862  521335 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:12:16.786431  521335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:12:16.805265  521335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:12:16.831066  521335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:12:16.851848  521335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:12:16.871382  521335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 10:12:16.890599  521335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:12:16.908792  521335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:12:16.929495  521335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:12:16.947751  521335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:12:16.965092  521335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:12:16.983957  521335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:12:17.002179  521335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:12:17.016491  521335 ssh_runner.go:195] Run: openssl version
	I1123 10:12:17.027206  521335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:12:17.036864  521335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:12:17.040831  521335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:12:17.040901  521335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:12:17.082230  521335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 10:12:17.090911  521335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:12:17.099097  521335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:12:17.103460  521335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:12:17.103523  521335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:12:17.146678  521335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:12:17.155474  521335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:12:17.163995  521335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:12:17.168297  521335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:12:17.168359  521335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:12:17.213526  521335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
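	Each trusted certificate is also linked under its OpenSSL subject hash, which is where names like b5213941.0 above come from. A sketch that computes the hash by shelling out to openssl and creates the link in a scratch directory instead of /etc/ssl/certs:

```go
// cahash.go: link a CA certificate under its OpenSSL subject hash, like the
// "openssl x509 -hash" plus "ln -fs" pair in the log. Uses a scratch directory
// so it can run unprivileged.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	dir := "certs-scratch"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	link := filepath.Join(dir, hash+".0")
	os.Remove(link) // "-f" behaviour: replace an existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", cert, "->", link)
}
```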
	I1123 10:12:17.222071  521335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:12:17.226333  521335 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:12:17.226424  521335 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-330197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-330197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:12:17.226538  521335 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:12:17.226605  521335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:12:17.256365  521335 cri.go:89] found id: ""
	I1123 10:12:17.256433  521335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:12:17.264719  521335 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:12:17.272715  521335 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:12:17.272810  521335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:12:17.281746  521335 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:12:17.281777  521335 kubeadm.go:158] found existing configuration files:
	
	I1123 10:12:17.281847  521335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 10:12:17.290198  521335 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:12:17.290281  521335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:12:17.297777  521335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 10:12:17.305580  521335 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:12:17.305677  521335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:12:17.313226  521335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 10:12:17.321385  521335 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:12:17.321531  521335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:12:17.329570  521335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 10:12:17.337590  521335 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:12:17.337691  521335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:12:17.345084  521335 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:12:17.390567  521335 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:12:17.390846  521335 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:12:17.423626  521335 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:12:17.423702  521335 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 10:12:17.423742  521335 kubeadm.go:319] OS: Linux
	I1123 10:12:17.423791  521335 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:12:17.423844  521335 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 10:12:17.423896  521335 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:12:17.423947  521335 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:12:17.423998  521335 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:12:17.424057  521335 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:12:17.424108  521335 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:12:17.424161  521335 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:12:17.424210  521335 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 10:12:17.495373  521335 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:12:17.495489  521335 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:12:17.495587  521335 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:12:17.503221  521335 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:12:17.509788  521335 out.go:252]   - Generating certificates and keys ...
	I1123 10:12:17.509918  521335 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:12:17.510016  521335 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
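Before invoking kubeadm init, minikube grep-checks each expected kubeconfig under /etc/kubernetes for this profile's control-plane endpoint and removes any file that does not reference it (here the files simply did not exist yet). A minimal sketch of the same stale-config check, run by hand on the node with the file names and endpoint taken from the log above:

	# Remove kubeconfigs that do not point at the expected control-plane endpoint
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done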
	
	
	==> CRI-O <==
	Nov 23 10:12:08 embed-certs-566990 crio[839]: time="2025-11-23T10:12:08.146938655Z" level=info msg="Created container b025124b04711d017cb5668c870b7ed681b709f2527914292a84af9fd2a3a5f1: kube-system/coredns-66bc5c9577-d8sh7/coredns" id=5622d8b8-0d12-4842-bbad-79bcc44d4231 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:12:08 embed-certs-566990 crio[839]: time="2025-11-23T10:12:08.147800405Z" level=info msg="Starting container: b025124b04711d017cb5668c870b7ed681b709f2527914292a84af9fd2a3a5f1" id=5b263968-cc1b-42ec-a6d0-5a63cd7bfc59 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:12:08 embed-certs-566990 crio[839]: time="2025-11-23T10:12:08.154857481Z" level=info msg="Started container" PID=1728 containerID=b025124b04711d017cb5668c870b7ed681b709f2527914292a84af9fd2a3a5f1 description=kube-system/coredns-66bc5c9577-d8sh7/coredns id=5b263968-cc1b-42ec-a6d0-5a63cd7bfc59 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d8dbb47bf8b758103ec7550e2f792c0b76efb17fd548dbc7a478bdbdd36b2c5
	Nov 23 10:12:11 embed-certs-566990 crio[839]: time="2025-11-23T10:12:11.138823182Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5a246a3b-792e-47f4-aa44-f902f88954af name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:12:11 embed-certs-566990 crio[839]: time="2025-11-23T10:12:11.138949879Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:12:11 embed-certs-566990 crio[839]: time="2025-11-23T10:12:11.144608155Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8e54fd3f98f6d174538fe1998307442cc3cbb9756ecca4547f8ff5d4fbb33278 UID:f36b53a6-0047-4dbc-9603-6a1965a89bb6 NetNS:/var/run/netns/ba2c3353-3165-425a-a0e5-fc74986bc4f6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012af70}] Aliases:map[]}"
	Nov 23 10:12:11 embed-certs-566990 crio[839]: time="2025-11-23T10:12:11.144781376Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 10:12:11 embed-certs-566990 crio[839]: time="2025-11-23T10:12:11.155539264Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8e54fd3f98f6d174538fe1998307442cc3cbb9756ecca4547f8ff5d4fbb33278 UID:f36b53a6-0047-4dbc-9603-6a1965a89bb6 NetNS:/var/run/netns/ba2c3353-3165-425a-a0e5-fc74986bc4f6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012af70}] Aliases:map[]}"
	Nov 23 10:12:11 embed-certs-566990 crio[839]: time="2025-11-23T10:12:11.155857834Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 10:12:11 embed-certs-566990 crio[839]: time="2025-11-23T10:12:11.160504712Z" level=info msg="Ran pod sandbox 8e54fd3f98f6d174538fe1998307442cc3cbb9756ecca4547f8ff5d4fbb33278 with infra container: default/busybox/POD" id=5a246a3b-792e-47f4-aa44-f902f88954af name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:12:11 embed-certs-566990 crio[839]: time="2025-11-23T10:12:11.162088123Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=84b7accc-965a-4bd9-b8da-140a3639f442 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:12:11 embed-certs-566990 crio[839]: time="2025-11-23T10:12:11.162232117Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=84b7accc-965a-4bd9-b8da-140a3639f442 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:12:11 embed-certs-566990 crio[839]: time="2025-11-23T10:12:11.162296241Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=84b7accc-965a-4bd9-b8da-140a3639f442 name=/runtime.v1.ImageService/ImageStatus
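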
	Nov 23 10:12:11 embed-certs-566990 crio[839]: time="2025-11-23T10:12:11.16347592Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3736b333-4fa8-46f1-8b8f-ddfc167bfcaf name=/runtime.v1.ImageService/PullImage
	Nov 23 10:12:11 embed-certs-566990 crio[839]: time="2025-11-23T10:12:11.168858302Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 10:12:13 embed-certs-566990 crio[839]: time="2025-11-23T10:12:13.193558828Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=3736b333-4fa8-46f1-8b8f-ddfc167bfcaf name=/runtime.v1.ImageService/PullImage
	Nov 23 10:12:13 embed-certs-566990 crio[839]: time="2025-11-23T10:12:13.194443806Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=955dcf38-3633-4fca-8f66-195f90151488 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:12:13 embed-certs-566990 crio[839]: time="2025-11-23T10:12:13.198531308Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=930c36e8-b078-4a05-ba76-a73f9b0b785a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:12:13 embed-certs-566990 crio[839]: time="2025-11-23T10:12:13.205895375Z" level=info msg="Creating container: default/busybox/busybox" id=415dd83d-4fbd-4053-bfb1-11df136c976e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:12:13 embed-certs-566990 crio[839]: time="2025-11-23T10:12:13.206042865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:12:13 embed-certs-566990 crio[839]: time="2025-11-23T10:12:13.211525212Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:12:13 embed-certs-566990 crio[839]: time="2025-11-23T10:12:13.212081397Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:12:13 embed-certs-566990 crio[839]: time="2025-11-23T10:12:13.234255982Z" level=info msg="Created container 5d2506326119ef546ca18bf2c2c9554a8e8ea2eff1c41b7ff30077e9b125546b: default/busybox/busybox" id=415dd83d-4fbd-4053-bfb1-11df136c976e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:12:13 embed-certs-566990 crio[839]: time="2025-11-23T10:12:13.240537013Z" level=info msg="Starting container: 5d2506326119ef546ca18bf2c2c9554a8e8ea2eff1c41b7ff30077e9b125546b" id=e5c1711a-06cd-4cdf-b53e-6efd91236f33 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:12:13 embed-certs-566990 crio[839]: time="2025-11-23T10:12:13.243295228Z" level=info msg="Started container" PID=1781 containerID=5d2506326119ef546ca18bf2c2c9554a8e8ea2eff1c41b7ff30077e9b125546b description=default/busybox/busybox id=e5c1711a-06cd-4cdf-b53e-6efd91236f33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8e54fd3f98f6d174538fe1998307442cc3cbb9756ecca4547f8ff5d4fbb33278
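The CRI-O entries above show the standard pull path for the busybox test image: an ImageStatus lookup misses, the image is pulled by tag, and the resolved digest is recorded before the container is created and started. The same check can be reproduced on the node with crictl, the CRI client minikube already invokes elsewhere in this log:

	# Does the runtime already have the image?
	sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# If not, pull it the same way the kubelet would
	sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc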
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	5d2506326119e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   8e54fd3f98f6d       busybox                                      default
	b025124b04711       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   2d8dbb47bf8b7       coredns-66bc5c9577-d8sh7                     kube-system
	6bd6385636c00       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   6b92ff07746d2       storage-provisioner                          kube-system
	a89cb858e3ad5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   7d216d6a56ae1       kindnet-p6kh4                                kube-system
	4d71e6ac13124       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   b2340c4b48b99       kube-proxy-k4lvf                             kube-system
	2eab63f404f2a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   5cee12a7a6513       kube-controller-manager-embed-certs-566990   kube-system
	d453a88b90619       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   f12701519e0e8       kube-scheduler-embed-certs-566990            kube-system
	0fd2f79f20c34       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   d84bda42f8372       etcd-embed-certs-566990                      kube-system
	fc8f8f6333544       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   4dace2bd494db       kube-apiserver-embed-certs-566990            kube-system
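The table above is the runtime's view of every container on the node. It can be regenerated with the crictl invocation used earlier in the log, minus --quiet so the full columns are printed:

	# List every container (running and exited) known to CRI-O
	sudo crictl ps -a
	# Restrict to kube-system pods, matching the label filter used earlier in the log
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system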
	
	
	==> coredns [b025124b04711d017cb5668c870b7ed681b709f2527914292a84af9fd2a3a5f1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39140 - 3634 "HINFO IN 131456641405166066.8972983982934069331. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021228289s
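CoreDNS came up cleanly; the HINFO query against a random name is its own startup self-check, and NXDOMAIN is the expected answer. A quick way to confirm cluster DNS end to end is to resolve a service name from the busybox pod started above (this busybox image ships nslookup):

	# Resolve the API server service through CoreDNS from inside the cluster
	kubectl exec busybox -- nslookup kubernetes.default.svc.cluster.local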
	
	
	==> describe nodes <==
	Name:               embed-certs-566990
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-566990
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=embed-certs-566990
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_11_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:11:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-566990
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:12:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:12:07 +0000   Sun, 23 Nov 2025 10:11:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:12:07 +0000   Sun, 23 Nov 2025 10:11:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:12:07 +0000   Sun, 23 Nov 2025 10:11:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:12:07 +0000   Sun, 23 Nov 2025 10:12:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-566990
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                7626cdea-55dc-447c-9203-313e96141bd6
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-d8sh7                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-566990                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-p6kh4                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-embed-certs-566990             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-566990    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-k4lvf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-embed-certs-566990             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Warning  CgroupV1                 68s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 68s)  kubelet          Node embed-certs-566990 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 68s)  kubelet          Node embed-certs-566990 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 68s)  kubelet          Node embed-certs-566990 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node embed-certs-566990 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node embed-certs-566990 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node embed-certs-566990 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node embed-certs-566990 event: Registered Node embed-certs-566990 in Controller
	  Normal   NodeReady                13s                kubelet          Node embed-certs-566990 status is now: NodeReady
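Per the events, the node transitioned to Ready about 13 seconds before this dump was taken. The same check can be made programmatically without the full describe output, for example:

	# Print only the Ready condition's status for this node
	kubectl get node embed-certs-566990 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'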
	
	
	==> dmesg <==
	[Nov23 09:47] overlayfs: idmapped layers are currently not supported
	[ +12.563591] hrtimer: interrupt took 4093727 ns
	[ +14.190024] overlayfs: idmapped layers are currently not supported
	[Nov23 09:49] overlayfs: idmapped layers are currently not supported
	[Nov23 09:50] overlayfs: idmapped layers are currently not supported
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	[Nov23 10:10] overlayfs: idmapped layers are currently not supported
	[Nov23 10:11] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0fd2f79f20c34159aa664da02394614021b82bcc7f21af85366d388ac80fa027] <==
	{"level":"warn","ts":"2025-11-23T10:11:16.346956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.352282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.374974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.390301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.409491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.428665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.452749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.480343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.483758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.515998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.530314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.548019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.570998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.595163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.640584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.661732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.677296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.701265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.719445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.732497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.755165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.804107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.810270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.825800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:11:16.906004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53630","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:12:21 up  2:54,  0 user,  load average: 5.04, 4.64, 3.56
	Linux embed-certs-566990 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a89cb858e3ad5b8b028d2e7f4c8f7d941499f69d3d0832801e6d618a9b9b4d15] <==
	I1123 10:11:26.770766       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:11:26.771745       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:11:26.771909       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:11:26.771935       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:11:26.771947       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:11:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:11:26.972045       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:11:26.972120       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:11:26.973522       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:11:27.057720       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 10:11:56.968878       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 10:11:56.974269       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 10:11:56.974412       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 10:11:56.974497       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 10:11:58.473832       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:11:58.473932       1 metrics.go:72] Registering metrics
	I1123 10:11:58.474041       1 controller.go:711] "Syncing nftables rules"
	I1123 10:12:06.968998       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:12:06.969061       1 main.go:301] handling current node
	I1123 10:12:16.970512       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:12:16.970547       1 main.go:301] handling current node
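kindnet's initial list/watch calls against the service VIP 10.96.0.1:443 timed out once at startup and succeeded on retry; its caches synced about two seconds later, so the failure was transient. One way to confirm the watches stayed healthy after the recovery is to recheck the recent log:

	# Look for fresh sync or watch-failure messages from kindnet
	kubectl -n kube-system logs kindnet-p6kh4 --since=5m | grep -E "Caches are synced|Failed to watch"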
	
	
	==> kube-apiserver [fc8f8f6333544414f7f5d821d7a5268af20a612c6bc318f176680d92055c6133] <==
	I1123 10:11:17.755777       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 10:11:17.761369       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1123 10:11:17.761645       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1123 10:11:17.761918       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:11:17.770215       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:11:17.797196       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:11:17.967520       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:11:18.524779       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:11:18.529949       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:11:18.530035       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:11:19.264289       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:11:19.316089       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:11:19.379588       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:11:19.388355       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 10:11:19.389693       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:11:19.397311       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:11:19.689120       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:11:20.487938       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:11:20.505559       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:11:20.519400       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 10:11:25.389167       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:11:25.396273       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:11:25.585881       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 10:11:25.709879       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1123 10:12:18.992316       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:37840: use of closed network connection
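Aside from one closed-connection read error, the API server log shows routine admission-evaluator registration and ClusterIP allocation. Its aggregate health can be queried directly:

	# Per-check readiness report from the API server
	kubectl get --raw='/readyz?verbose'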
	
	
	==> kube-controller-manager [2eab63f404f2ab507336b12ccdb336fe240f7e8c498a4f2fd7bba197f7500616] <==
	I1123 10:11:24.739475       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 10:11:24.747489       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:11:24.751640       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:11:24.762110       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 10:11:24.779568       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:11:24.779662       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 10:11:24.780804       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:11:24.780888       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:11:24.780964       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 10:11:24.781706       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:11:24.782266       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 10:11:24.782407       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 10:11:24.782928       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 10:11:24.782939       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:11:24.782950       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:11:24.782958       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 10:11:24.785262       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 10:11:24.790487       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:11:24.791551       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 10:11:24.791629       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 10:11:24.791668       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 10:11:24.791679       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 10:11:24.791686       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 10:11:24.801888       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-566990" podCIDRs=["10.244.0.0/24"]
	I1123 10:12:09.729285       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
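The controller manager assigned 10.244.0.0/24 to the node and later left master-disruption mode once the node reported Ready. The assigned range can be read back from the node object:

	# Show the PodCIDR the node-ipam controller assigned
	kubectl get node embed-certs-566990 -o jsonpath='{.spec.podCIDR}'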
	
	
	==> kube-proxy [4d71e6ac13124fa2d544ea21c559cd6740ad2f0942f012f7ea41d66d45e22776] <==
	I1123 10:11:26.754553       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:11:26.856635       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:11:26.958548       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:11:26.958646       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 10:11:26.958728       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:11:27.011781       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:11:27.011946       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:11:27.025098       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:11:27.025625       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:11:27.029105       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:11:27.031486       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:11:27.031549       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:11:27.032214       1 config.go:200] "Starting service config controller"
	I1123 10:11:27.032407       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:11:27.034358       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:11:27.034415       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:11:27.034968       1 config.go:309] "Starting node config controller"
	I1123 10:11:27.035018       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:11:27.035048       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:11:27.132368       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 10:11:27.133521       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:11:27.134540       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
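The "configuration may be incomplete" message is kube-proxy noting that nodePortAddresses is unset, so NodePort traffic is accepted on every local IP; it is a warning, not a failure. The effective setting lives in the kube-proxy ConfigMap that kubeadm generates:

	# Inspect the nodePortAddresses field kube-proxy is actually running with
	kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n "nodePortAddresses"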
	
	
	==> kube-scheduler [d453a88b90619fc8a587333c02c7364a9bb7f5f58f69368cf472e1db874c006b] <==
	E1123 10:11:17.726899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 10:11:17.727052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:11:17.727140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:11:17.727226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:11:17.727310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:11:17.727426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 10:11:17.727475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:11:17.727513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 10:11:17.727628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:11:17.727673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 10:11:17.727707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:11:18.549857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:11:18.661855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 10:11:18.676550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:11:18.700417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 10:11:18.717989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 10:11:18.728083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 10:11:18.825338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 10:11:18.840715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 10:11:18.870290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 10:11:18.895311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 10:11:18.928005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:11:18.983739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:11:19.236147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 10:11:22.012474       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:11:25 embed-certs-566990 kubelet[1291]: I1123 10:11:25.680711    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0047dc25-c013-471f-89b6-22e1399e2dc9-xtables-lock\") pod \"kindnet-p6kh4\" (UID: \"0047dc25-c013-471f-89b6-22e1399e2dc9\") " pod="kube-system/kindnet-p6kh4"
	Nov 23 10:11:25 embed-certs-566990 kubelet[1291]: I1123 10:11:25.680733    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88d44863-5a0e-44f5-9806-2e6e769dc05b-lib-modules\") pod \"kube-proxy-k4lvf\" (UID: \"88d44863-5a0e-44f5-9806-2e6e769dc05b\") " pod="kube-system/kube-proxy-k4lvf"
	Nov 23 10:11:25 embed-certs-566990 kubelet[1291]: I1123 10:11:25.680751    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0047dc25-c013-471f-89b6-22e1399e2dc9-lib-modules\") pod \"kindnet-p6kh4\" (UID: \"0047dc25-c013-471f-89b6-22e1399e2dc9\") " pod="kube-system/kindnet-p6kh4"
	Nov 23 10:11:25 embed-certs-566990 kubelet[1291]: I1123 10:11:25.680769    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0047dc25-c013-471f-89b6-22e1399e2dc9-cni-cfg\") pod \"kindnet-p6kh4\" (UID: \"0047dc25-c013-471f-89b6-22e1399e2dc9\") " pod="kube-system/kindnet-p6kh4"
	Nov 23 10:11:25 embed-certs-566990 kubelet[1291]: E1123 10:11:25.851882    1291 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 10:11:25 embed-certs-566990 kubelet[1291]: E1123 10:11:25.851929    1291 projected.go:196] Error preparing data for projected volume kube-api-access-fvftz for pod kube-system/kube-proxy-k4lvf: configmap "kube-root-ca.crt" not found
	Nov 23 10:11:25 embed-certs-566990 kubelet[1291]: E1123 10:11:25.852014    1291 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/88d44863-5a0e-44f5-9806-2e6e769dc05b-kube-api-access-fvftz podName:88d44863-5a0e-44f5-9806-2e6e769dc05b nodeName:}" failed. No retries permitted until 2025-11-23 10:11:26.351978056 +0000 UTC m=+6.014804663 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fvftz" (UniqueName: "kubernetes.io/projected/88d44863-5a0e-44f5-9806-2e6e769dc05b-kube-api-access-fvftz") pod "kube-proxy-k4lvf" (UID: "88d44863-5a0e-44f5-9806-2e6e769dc05b") : configmap "kube-root-ca.crt" not found
	Nov 23 10:11:25 embed-certs-566990 kubelet[1291]: E1123 10:11:25.852321    1291 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 10:11:25 embed-certs-566990 kubelet[1291]: E1123 10:11:25.852401    1291 projected.go:196] Error preparing data for projected volume kube-api-access-7g4c2 for pod kube-system/kindnet-p6kh4: configmap "kube-root-ca.crt" not found
	Nov 23 10:11:25 embed-certs-566990 kubelet[1291]: E1123 10:11:25.852445    1291 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0047dc25-c013-471f-89b6-22e1399e2dc9-kube-api-access-7g4c2 podName:0047dc25-c013-471f-89b6-22e1399e2dc9 nodeName:}" failed. No retries permitted until 2025-11-23 10:11:26.352431028 +0000 UTC m=+6.015257635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7g4c2" (UniqueName: "kubernetes.io/projected/0047dc25-c013-471f-89b6-22e1399e2dc9-kube-api-access-7g4c2") pod "kindnet-p6kh4" (UID: "0047dc25-c013-471f-89b6-22e1399e2dc9") : configmap "kube-root-ca.crt" not found
	Nov 23 10:11:26 embed-certs-566990 kubelet[1291]: I1123 10:11:26.386862    1291 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 10:11:26 embed-certs-566990 kubelet[1291]: W1123 10:11:26.544678    1291 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/crio-b2340c4b48b99298572a652232bea27fa2ba4101cec2e5b3cf7f270717d9185e WatchSource:0}: Error finding container b2340c4b48b99298572a652232bea27fa2ba4101cec2e5b3cf7f270717d9185e: Status 404 returned error can't find the container with id b2340c4b48b99298572a652232bea27fa2ba4101cec2e5b3cf7f270717d9185e
	Nov 23 10:11:26 embed-certs-566990 kubelet[1291]: W1123 10:11:26.575195    1291 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/crio-7d216d6a56ae139c1d1e562bad5f910b73c8ff7921b2f589c06ee021cf8d5f61 WatchSource:0}: Error finding container 7d216d6a56ae139c1d1e562bad5f910b73c8ff7921b2f589c06ee021cf8d5f61: Status 404 returned error can't find the container with id 7d216d6a56ae139c1d1e562bad5f910b73c8ff7921b2f589c06ee021cf8d5f61
	Nov 23 10:11:27 embed-certs-566990 kubelet[1291]: I1123 10:11:27.632740    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-p6kh4" podStartSLOduration=2.632719902 podStartE2EDuration="2.632719902s" podCreationTimestamp="2025-11-23 10:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:11:27.632007234 +0000 UTC m=+7.294833857" watchObservedRunningTime="2025-11-23 10:11:27.632719902 +0000 UTC m=+7.295546509"
	Nov 23 10:11:27 embed-certs-566990 kubelet[1291]: I1123 10:11:27.697142    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k4lvf" podStartSLOduration=2.697106538 podStartE2EDuration="2.697106538s" podCreationTimestamp="2025-11-23 10:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:11:27.696466692 +0000 UTC m=+7.359293348" watchObservedRunningTime="2025-11-23 10:11:27.697106538 +0000 UTC m=+7.359933154"
	Nov 23 10:12:07 embed-certs-566990 kubelet[1291]: I1123 10:12:07.445280    1291 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 10:12:07 embed-certs-566990 kubelet[1291]: I1123 10:12:07.704988    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clmzk\" (UniqueName: \"kubernetes.io/projected/9f1e25da-6804-44f0-aa70-5ff52015cd12-kube-api-access-clmzk\") pod \"storage-provisioner\" (UID: \"9f1e25da-6804-44f0-aa70-5ff52015cd12\") " pod="kube-system/storage-provisioner"
	Nov 23 10:12:07 embed-certs-566990 kubelet[1291]: I1123 10:12:07.705044    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/737943ee-552c-4a07-aa55-978b687c5b59-config-volume\") pod \"coredns-66bc5c9577-d8sh7\" (UID: \"737943ee-552c-4a07-aa55-978b687c5b59\") " pod="kube-system/coredns-66bc5c9577-d8sh7"
	Nov 23 10:12:07 embed-certs-566990 kubelet[1291]: I1123 10:12:07.705069    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9f1e25da-6804-44f0-aa70-5ff52015cd12-tmp\") pod \"storage-provisioner\" (UID: \"9f1e25da-6804-44f0-aa70-5ff52015cd12\") " pod="kube-system/storage-provisioner"
	Nov 23 10:12:07 embed-certs-566990 kubelet[1291]: I1123 10:12:07.705091    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfxvf\" (UniqueName: \"kubernetes.io/projected/737943ee-552c-4a07-aa55-978b687c5b59-kube-api-access-gfxvf\") pod \"coredns-66bc5c9577-d8sh7\" (UID: \"737943ee-552c-4a07-aa55-978b687c5b59\") " pod="kube-system/coredns-66bc5c9577-d8sh7"
	Nov 23 10:12:08 embed-certs-566990 kubelet[1291]: W1123 10:12:08.064140    1291 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/crio-2d8dbb47bf8b758103ec7550e2f792c0b76efb17fd548dbc7a478bdbdd36b2c5 WatchSource:0}: Error finding container 2d8dbb47bf8b758103ec7550e2f792c0b76efb17fd548dbc7a478bdbdd36b2c5: Status 404 returned error can't find the container with id 2d8dbb47bf8b758103ec7550e2f792c0b76efb17fd548dbc7a478bdbdd36b2c5
	Nov 23 10:12:08 embed-certs-566990 kubelet[1291]: I1123 10:12:08.769008    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-d8sh7" podStartSLOduration=43.768988104 podStartE2EDuration="43.768988104s" podCreationTimestamp="2025-11-23 10:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:12:08.74225602 +0000 UTC m=+48.405082627" watchObservedRunningTime="2025-11-23 10:12:08.768988104 +0000 UTC m=+48.431814719"
	Nov 23 10:12:08 embed-certs-566990 kubelet[1291]: I1123 10:12:08.789651    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.789631193 podStartE2EDuration="42.789631193s" podCreationTimestamp="2025-11-23 10:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:12:08.769786944 +0000 UTC m=+48.432613559" watchObservedRunningTime="2025-11-23 10:12:08.789631193 +0000 UTC m=+48.452457800"
	Nov 23 10:12:10 embed-certs-566990 kubelet[1291]: I1123 10:12:10.936779    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66w6d\" (UniqueName: \"kubernetes.io/projected/f36b53a6-0047-4dbc-9603-6a1965a89bb6-kube-api-access-66w6d\") pod \"busybox\" (UID: \"f36b53a6-0047-4dbc-9603-6a1965a89bb6\") " pod="default/busybox"
	Nov 23 10:12:11 embed-certs-566990 kubelet[1291]: W1123 10:12:11.160030    1291 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/crio-8e54fd3f98f6d174538fe1998307442cc3cbb9756ecca4547f8ff5d4fbb33278 WatchSource:0}: Error finding container 8e54fd3f98f6d174538fe1998307442cc3cbb9756ecca4547f8ff5d4fbb33278: Status 404 returned error can't find the container with id 8e54fd3f98f6d174538fe1998307442cc3cbb9756ecca4547f8ff5d4fbb33278
	
	
	==> storage-provisioner [6bd6385636c0019ff0f6522c98ae530d0e6c8b384ad3cc9335381323ff586948] <==
	I1123 10:12:08.109370       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:12:08.126883       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:12:08.127019       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:12:08.133523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:12:08.146382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:12:08.146639       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:12:08.166271       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5390d33c-adeb-4208-bc55-623048fa6ee4", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-566990_5c4b2870-3a43-43a2-ae98-359181d9ffd9 became leader
	I1123 10:12:08.166426       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-566990_5c4b2870-3a43-43a2-ae98-359181d9ffd9!
	W1123 10:12:08.167118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:12:08.186740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:12:08.270956       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-566990_5c4b2870-3a43-43a2-ae98-359181d9ffd9!
	W1123 10:12:10.190690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:12:10.203051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:12:12.205997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:12:12.210444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:12:14.214475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:12:14.221288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:12:16.224488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:12:16.241500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:12:18.245809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:12:18.255082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:12:20.258595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:12:20.265879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
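
The storage-provisioner log above is otherwise healthy; the repeated "v1 Endpoints is deprecated in v1.33+" warnings appear because the provisioner still uses an Endpoints object (kube-system/k8s.io-minikube-hostpath) as its leader-election lock. Below is a minimal sketch of the Lease-based equivalent in client-go, which avoids those warnings; the names are illustrative and this is not the provisioner's actual code.

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // the provisioner runs as a pod
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname() // identity must be unique per replica

	// A Lease lock instead of the deprecated v1 Endpoints lock seen in the log.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath", // illustrative, mirrors the log above
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader, starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership")
			},
		},
	})
}

The lease durations above are the conventional client-go defaults; the point is only that a Lease lock removes the EndpointSlice deprecation noise without changing the election behaviour seen in the log.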
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-566990 -n embed-certs-566990
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-566990 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-330197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-330197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (280.37317ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:13:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
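
The MK_ADDON_ENABLE_PAUSED error above is the recurring cause across the Pause and EnableAddon failures in this run: before enabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node, and on this crio node the runc state directory /run/runc is missing, so the command exits with status 1 and the enable aborts. A minimal local sketch of that kind of check follows (hypothetical helper, not minikube's actual code; in the real flow the command runs over SSH inside the node container).

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer mirrors the fields we care about from `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused runs `sudo runc list -f json` and returns the IDs of paused
// containers. A missing state directory such as /run/runc makes runc exit
// non-zero, which is exactly the failure surfaced in the stderr above.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	// runc may print "null" when there are no containers; Unmarshal tolerates that.
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}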
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-330197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-330197 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-330197 describe deploy/metrics-server -n kube-system: exit status 1 (118.845897ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-330197 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
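
For context, the assertion at start_stop_delete_test.go:219 passes only when the metrics-server Deployment's container image carries the fake.domain registry override supplied via --images/--registries; because the enable never completed, the Deployment does not exist and the describe output is empty. A sketch of the same verification done directly through kubectl's jsonpath output is shown below (context name taken from the profile above; the helper is illustrative, not the test's code).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// expectedPrefix is the image the test expects after the registry override.
const expectedPrefix = "fake.domain/registry.k8s.io/echoserver:1.4"

func main() {
	// The test uses the profile name as the kubectl context.
	ctx := "default-k8s-diff-port-330197"
	out, err := exec.Command("kubectl", "--context", ctx,
		"-n", "kube-system", "get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[*].image}").Output()
	if err != nil {
		// Matches the failure above: the Deployment was never created.
		fmt.Println("metrics-server deployment not found:", err)
		return
	}
	if strings.Contains(string(out), expectedPrefix) {
		fmt.Println("addon image override applied:", string(out))
	} else {
		fmt.Println("unexpected image:", string(out))
	}
}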
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-330197
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-330197:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c",
	        "Created": "2025-11-23T10:12:08.256335726Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 521791,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:12:08.316452073Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c/hostname",
	        "HostsPath": "/var/lib/docker/containers/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c/hosts",
	        "LogPath": "/var/lib/docker/containers/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c-json.log",
	        "Name": "/default-k8s-diff-port-330197",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-330197:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-330197",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c",
	                "LowerDir": "/var/lib/docker/overlay2/3f48485d51ecbb271eab092e267a4905e984900c5592bc0d63966db4bfd4a0c4-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3f48485d51ecbb271eab092e267a4905e984900c5592bc0d63966db4bfd4a0c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3f48485d51ecbb271eab092e267a4905e984900c5592bc0d63966db4bfd4a0c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3f48485d51ecbb271eab092e267a4905e984900c5592bc0d63966db4bfd4a0c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-330197",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-330197/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-330197",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-330197",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-330197",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ebecc49d82426ba893133b90fb6973e4ab4501a9c12de892ba114184363d28b3",
	            "SandboxKey": "/var/run/docker/netns/ebecc49d8242",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33488"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-330197": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:55:5b:6a:1d:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "648b049bc86ff8eff41f306c615c5a3664920d5b8756357da481331ccc4f062a",
	                    "EndpointID": "26036fcbb6b4e0919ad1f71ae457b268bc800eb764c8fcde7f555981be7ad473",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-330197",
	                        "001c54c15317"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
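
The most useful part of the inspect output above is NetworkSettings.Ports: the kicbase container publishes SSH (22/tcp) and the apiserver port (8444/tcp for this -diff-port profile) on ephemeral host ports bound to 127.0.0.1, and minikube resolves those mappings before dialing the node, as the later `docker container inspect -f ... "22/tcp"` calls in the log show. A minimal sketch of the same lookup, assuming a local docker CLI and the profile name above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort asks docker for the host port bound to a given container port,
// mirroring the Go template minikube uses later in this log.
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	for _, p := range []string{"22/tcp", "8444/tcp"} {
		hp, err := hostPort("default-k8s-diff-port-330197", p)
		if err != nil {
			fmt.Println(p, "lookup failed:", err)
			continue
		}
		fmt.Printf("%s -> 127.0.0.1:%s\n", p, hp)
	}
}

Against the inspect output above this would print 22/tcp -> 127.0.0.1:33486 and 8444/tcp -> 127.0.0.1:33489.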
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-330197 -n default-k8s-diff-port-330197
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-330197 logs -n 25
E1123 10:13:35.107687  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/kindnet-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-330197 logs -n 25: (1.244696794s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-706028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │                     │
	│ stop    │ -p old-k8s-version-706028 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-706028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p old-k8s-version-706028 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-020224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ stop    │ -p no-preload-020224 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ image   │ old-k8s-version-706028 image list --format=json                                                                                                                                                                                               │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ pause   │ -p old-k8s-version-706028 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-020224 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:11 UTC │
	│ delete  │ -p old-k8s-version-706028                                                                                                                                                                                                                     │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ delete  │ -p old-k8s-version-706028                                                                                                                                                                                                                     │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:12 UTC │
	│ image   │ no-preload-020224 image list --format=json                                                                                                                                                                                                    │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │ 23 Nov 25 10:11 UTC │
	│ pause   │ -p no-preload-020224 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │                     │
	│ delete  │ -p no-preload-020224                                                                                                                                                                                                                          │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │ 23 Nov 25 10:12 UTC │
	│ delete  │ -p no-preload-020224                                                                                                                                                                                                                          │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ delete  │ -p disable-driver-mounts-097888                                                                                                                                                                                                               │ disable-driver-mounts-097888 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-566990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │                     │
	│ stop    │ -p embed-certs-566990 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ addons  │ enable dashboard -p embed-certs-566990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-330197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:12:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:12:34.569376  524253 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:12:34.569984  524253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:12:34.570020  524253 out.go:374] Setting ErrFile to fd 2...
	I1123 10:12:34.570040  524253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:12:34.570356  524253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:12:34.570797  524253 out.go:368] Setting JSON to false
	I1123 10:12:34.571792  524253 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10504,"bootTime":1763882251,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:12:34.571892  524253 start.go:143] virtualization:  
	I1123 10:12:34.577581  524253 out.go:179] * [embed-certs-566990] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:12:34.580918  524253 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 10:12:34.581007  524253 notify.go:221] Checking for updates...
	I1123 10:12:34.585526  524253 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:12:34.588781  524253 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:12:34.591797  524253 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 10:12:34.595050  524253 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:12:34.598347  524253 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:12:34.601787  524253 config.go:182] Loaded profile config "embed-certs-566990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:12:34.602433  524253 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:12:34.643234  524253 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:12:34.643431  524253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:12:34.748631  524253 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:12:34.738537646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:12:34.748730  524253 docker.go:319] overlay module found
	I1123 10:12:34.752969  524253 out.go:179] * Using the docker driver based on existing profile
	I1123 10:12:34.755784  524253 start.go:309] selected driver: docker
	I1123 10:12:34.755803  524253 start.go:927] validating driver "docker" against &{Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:12:34.755920  524253 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:12:34.756610  524253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:12:34.842409  524253 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:12:34.83283756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:12:34.842749  524253 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:12:34.842785  524253 cni.go:84] Creating CNI manager for ""
	I1123 10:12:34.842845  524253 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:12:34.842895  524253 start.go:353] cluster config:
	{Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:12:34.846210  524253 out.go:179] * Starting "embed-certs-566990" primary control-plane node in "embed-certs-566990" cluster
	I1123 10:12:34.848995  524253 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:12:34.851506  524253 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:12:34.854397  524253 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:12:34.854457  524253 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 10:12:34.854468  524253 cache.go:65] Caching tarball of preloaded images
	I1123 10:12:34.854564  524253 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 10:12:34.854581  524253 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:12:34.854694  524253 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/config.json ...
	I1123 10:12:34.854920  524253 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:12:34.886994  524253 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:12:34.887012  524253 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:12:34.887026  524253 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:12:34.887058  524253 start.go:360] acquireMachinesLock for embed-certs-566990: {Name:mkc766faecda88b98c3d85f6aada2ef6121554c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:12:34.887139  524253 start.go:364] duration metric: took 39.409µs to acquireMachinesLock for "embed-certs-566990"
	I1123 10:12:34.887184  524253 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:12:34.887196  524253 fix.go:54] fixHost starting: 
	I1123 10:12:34.887460  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:34.914595  524253 fix.go:112] recreateIfNeeded on embed-certs-566990: state=Stopped err=<nil>
	W1123 10:12:34.914626  524253 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 10:12:34.340808  521335 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:12:34.345217  521335 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:12:34.345242  521335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:12:34.363990  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:12:34.886231  521335 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:12:34.886345  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:34.886418  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-330197 minikube.k8s.io/updated_at=2025_11_23T10_12_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=default-k8s-diff-port-330197 minikube.k8s.io/primary=true
	I1123 10:12:35.179336  521335 ops.go:34] apiserver oom_adj: -16
	I1123 10:12:35.179445  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:35.679643  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:36.179506  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:36.679562  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:37.179901  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:37.679592  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:38.179521  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:38.679597  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:38.896707  521335 kubeadm.go:1114] duration metric: took 4.010403249s to wait for elevateKubeSystemPrivileges
	I1123 10:12:38.896738  521335 kubeadm.go:403] duration metric: took 21.670318246s to StartCluster
	I1123 10:12:38.896755  521335 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:38.896813  521335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:12:38.897518  521335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:38.897743  521335 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:12:38.897850  521335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:12:38.898107  521335 config.go:182] Loaded profile config "default-k8s-diff-port-330197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:12:38.898096  521335 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:12:38.898216  521335 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-330197"
	I1123 10:12:38.898233  521335 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-330197"
	I1123 10:12:38.898261  521335 host.go:66] Checking if "default-k8s-diff-port-330197" exists ...
	I1123 10:12:38.898770  521335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:12:38.899069  521335 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-330197"
	I1123 10:12:38.899090  521335 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-330197"
	I1123 10:12:38.899386  521335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:12:38.900888  521335 out.go:179] * Verifying Kubernetes components...
	I1123 10:12:38.909541  521335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:12:38.951384  521335 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-330197"
	I1123 10:12:38.951422  521335 host.go:66] Checking if "default-k8s-diff-port-330197" exists ...
	I1123 10:12:38.951845  521335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:12:38.967832  521335 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:12:34.917675  524253 out.go:252] * Restarting existing docker container for "embed-certs-566990" ...
	I1123 10:12:34.917783  524253 cli_runner.go:164] Run: docker start embed-certs-566990
	I1123 10:12:35.293213  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:35.321485  524253 kic.go:430] container "embed-certs-566990" state is running.
	I1123 10:12:35.321878  524253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-566990
	I1123 10:12:35.342173  524253 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/config.json ...
	I1123 10:12:35.342405  524253 machine.go:94] provisionDockerMachine start ...
	I1123 10:12:35.342468  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:35.368208  524253 main.go:143] libmachine: Using SSH client type: native
	I1123 10:12:35.368636  524253 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33491 <nil> <nil>}
	I1123 10:12:35.368650  524253 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:12:35.369235  524253 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46444->127.0.0.1:33491: read: connection reset by peer
	I1123 10:12:38.541099  524253 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-566990
	
	I1123 10:12:38.541123  524253 ubuntu.go:182] provisioning hostname "embed-certs-566990"
	I1123 10:12:38.541252  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:38.562661  524253 main.go:143] libmachine: Using SSH client type: native
	I1123 10:12:38.562972  524253 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33491 <nil> <nil>}
	I1123 10:12:38.562990  524253 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-566990 && echo "embed-certs-566990" | sudo tee /etc/hostname
	I1123 10:12:38.731678  524253 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-566990
	
	I1123 10:12:38.731818  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:38.758549  524253 main.go:143] libmachine: Using SSH client type: native
	I1123 10:12:38.758869  524253 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33491 <nil> <nil>}
	I1123 10:12:38.758892  524253 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-566990' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-566990/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-566990' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:12:38.941390  524253 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:12:38.941475  524253 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 10:12:38.941514  524253 ubuntu.go:190] setting up certificates
	I1123 10:12:38.941525  524253 provision.go:84] configureAuth start
	I1123 10:12:38.941588  524253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-566990
	I1123 10:12:39.003538  524253 provision.go:143] copyHostCerts
	I1123 10:12:39.003616  524253 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 10:12:39.003633  524253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 10:12:39.003738  524253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 10:12:39.003846  524253 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 10:12:39.003857  524253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 10:12:39.003885  524253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 10:12:39.003943  524253 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 10:12:39.003953  524253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 10:12:39.003981  524253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 10:12:39.004039  524253 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.embed-certs-566990 san=[127.0.0.1 192.168.76.2 embed-certs-566990 localhost minikube]
	I1123 10:12:39.446737  524253 provision.go:177] copyRemoteCerts
	I1123 10:12:39.446803  524253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:12:39.446855  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:39.472012  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
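The ssh_runner/sshutil lines above run each command inside the node container over SSH, authenticating with the profile's id_rsa key against the published port. A rough, self-contained sketch of that pattern with golang.org/x/crypto/ssh (port, user and key path copied from the log; this is an illustration, not minikube's actual runner):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Load the profile's SSH key and connect to the mapped port, then run one command.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
	}
	conn, err := ssh.Dial("tcp", "127.0.0.1:33491", cfg)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	session, err := conn.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("sudo mkdir -p /etc/docker")
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}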
	I1123 10:12:38.971538  521335 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:12:38.971562  521335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:12:38.971625  521335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:12:39.003539  521335 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:12:39.003562  521335 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:12:39.003632  521335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:12:39.074287  521335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33486 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:12:39.105620  521335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33486 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:12:39.381935  521335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:12:39.382040  521335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:12:39.498776  521335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:12:39.502445  521335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:12:39.878737  521335 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-330197" to be "Ready" ...
	I1123 10:12:39.879071  521335 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 10:12:40.394894  521335 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-330197" context rescaled to 1 replicas
	I1123 10:12:40.400579  521335 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
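The node_ready.go step above polls the node object for up to 6m0s until its Ready condition turns True. A minimal client-go sketch of that wait, assuming the host kubeconfig path shown later in the log and a simple fixed-interval retry rather than minikube's own retry logic:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True or the timeout expires.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21969-282998/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "default-k8s-diff-port-330197", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}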
	I1123 10:12:39.594174  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 10:12:39.631432  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:12:39.662326  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:12:39.689967  524253 provision.go:87] duration metric: took 748.419337ms to configureAuth
	I1123 10:12:39.690006  524253 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:12:39.690232  524253 config.go:182] Loaded profile config "embed-certs-566990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:12:39.690357  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:39.717852  524253 main.go:143] libmachine: Using SSH client type: native
	I1123 10:12:39.718184  524253 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33491 <nil> <nil>}
	I1123 10:12:39.718209  524253 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:12:40.197207  524253 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:12:40.197283  524253 machine.go:97] duration metric: took 4.854853123s to provisionDockerMachine
	I1123 10:12:40.197318  524253 start.go:293] postStartSetup for "embed-certs-566990" (driver="docker")
	I1123 10:12:40.197373  524253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:12:40.197590  524253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:12:40.197686  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:40.229724  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:40.352159  524253 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:12:40.358411  524253 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:12:40.358451  524253 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:12:40.358470  524253 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 10:12:40.358548  524253 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 10:12:40.358642  524253 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 10:12:40.358766  524253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:12:40.370891  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:12:40.411075  524253 start.go:296] duration metric: took 213.709795ms for postStartSetup
	I1123 10:12:40.411229  524253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:12:40.411293  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:40.444674  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:40.558879  524253 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:12:40.565041  524253 fix.go:56] duration metric: took 5.677837494s for fixHost
	I1123 10:12:40.565083  524253 start.go:83] releasing machines lock for "embed-certs-566990", held for 5.677926414s
	I1123 10:12:40.565157  524253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-566990
	I1123 10:12:40.586094  524253 ssh_runner.go:195] Run: cat /version.json
	I1123 10:12:40.586160  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:40.586427  524253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:12:40.586490  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:40.607542  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:40.625364  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:40.726401  524253 ssh_runner.go:195] Run: systemctl --version
	I1123 10:12:40.873672  524253 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:12:40.924087  524253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:12:40.929741  524253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:12:40.929849  524253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:12:40.939063  524253 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:12:40.939091  524253 start.go:496] detecting cgroup driver to use...
	I1123 10:12:40.939153  524253 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:12:40.939270  524253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:12:40.961540  524253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:12:40.982960  524253 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:12:40.983075  524253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:12:41.000648  524253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:12:41.017773  524253 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:12:41.142938  524253 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:12:41.257362  524253 docker.go:234] disabling docker service ...
	I1123 10:12:41.257447  524253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:12:41.274100  524253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:12:41.288195  524253 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:12:41.410357  524253 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:12:41.528945  524253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:12:41.542597  524253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:12:41.557753  524253 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:12:41.557821  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.567854  524253 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:12:41.567918  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.576624  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.587089  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.597732  524253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:12:41.606995  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.616231  524253 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.624635  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.633642  524253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:12:41.641219  524253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:12:41.648916  524253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:12:41.758251  524253 ssh_runner.go:195] Run: sudo systemctl restart crio
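Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with settings roughly like the following before crio is restarted (section headers omitted; the exact file layout depends on the base image):

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]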
	I1123 10:12:41.951813  524253 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:12:41.951925  524253 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:12:41.955770  524253 start.go:564] Will wait 60s for crictl version
	I1123 10:12:41.955883  524253 ssh_runner.go:195] Run: which crictl
	I1123 10:12:41.959470  524253 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:12:41.986858  524253 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:12:41.987048  524253 ssh_runner.go:195] Run: crio --version
	I1123 10:12:42.028777  524253 ssh_runner.go:195] Run: crio --version
	I1123 10:12:42.064772  524253 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:12:40.403398  521335 addons.go:530] duration metric: took 1.505302194s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1123 10:12:41.881657  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	I1123 10:12:42.067765  524253 cli_runner.go:164] Run: docker network inspect embed-certs-566990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:12:42.087730  524253 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:12:42.092543  524253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:12:42.104554  524253 kubeadm.go:884] updating cluster {Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:12:42.104708  524253 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:12:42.104775  524253 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:12:42.148463  524253 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:12:42.148490  524253 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:12:42.148557  524253 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:12:42.183482  524253 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:12:42.183511  524253 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:12:42.183520  524253 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 10:12:42.183631  524253 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-566990 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:12:42.183727  524253 ssh_runner.go:195] Run: crio config
	I1123 10:12:42.243185  524253 cni.go:84] Creating CNI manager for ""
	I1123 10:12:42.243216  524253 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:12:42.243250  524253 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:12:42.243278  524253 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-566990 NodeName:embed-certs-566990 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:12:42.243415  524253 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-566990"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:12:42.243496  524253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:12:42.253200  524253 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:12:42.253283  524253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:12:42.263475  524253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 10:12:42.278930  524253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:12:42.293834  524253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1123 10:12:42.308522  524253 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:12:42.318133  524253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:12:42.328690  524253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:12:42.450255  524253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:12:42.467317  524253 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990 for IP: 192.168.76.2
	I1123 10:12:42.467386  524253 certs.go:195] generating shared ca certs ...
	I1123 10:12:42.467417  524253 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:42.467593  524253 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:12:42.467667  524253 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:12:42.467703  524253 certs.go:257] generating profile certs ...
	I1123 10:12:42.467842  524253 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/client.key
	I1123 10:12:42.467921  524253 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key.e8338b8a
	I1123 10:12:42.468004  524253 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.key
	I1123 10:12:42.468177  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:12:42.468238  524253 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:12:42.468263  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:12:42.468320  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:12:42.468371  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:12:42.468429  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:12:42.468507  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:12:42.469182  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:12:42.489499  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:12:42.513609  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:12:42.531981  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:12:42.555102  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 10:12:42.593107  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:12:42.619088  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:12:42.638840  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:12:42.664107  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:12:42.687079  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:12:42.707155  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:12:42.727647  524253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:12:42.741153  524253 ssh_runner.go:195] Run: openssl version
	I1123 10:12:42.747737  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:12:42.756819  524253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:12:42.761068  524253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:12:42.761172  524253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:12:42.808303  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:12:42.816294  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:12:42.824328  524253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:12:42.828098  524253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:12:42.828195  524253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:12:42.874451  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:12:42.883803  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:12:42.892299  524253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:12:42.896079  524253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:12:42.896145  524253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:12:42.937488  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 10:12:42.945495  524253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:12:42.949292  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:12:42.990640  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:12:43.033735  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:12:43.077119  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:12:43.124424  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:12:43.174673  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
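Each "openssl x509 -checkend 86400" call above asks whether a certificate expires within the next 24 hours. An equivalent check in Go with crypto/x509 (certificate path copied from the log; a standalone illustration, not minikube code) could look like:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "openssl x509 -checkend 86400" answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < d, nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}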
	I1123 10:12:43.232910  524253 kubeadm.go:401] StartCluster: {Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:12:43.233002  524253 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:12:43.233065  524253 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:12:43.269122  524253 cri.go:89] found id: "34c25c168914821590128b9aa6e866de7484d016e755b1b4599ef135b1d8e798"
	I1123 10:12:43.269144  524253 cri.go:89] found id: ""
	I1123 10:12:43.269199  524253 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:12:43.282719  524253 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:12:43Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:12:43.282790  524253 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:12:43.298825  524253 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:12:43.298846  524253 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:12:43.298897  524253 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:12:43.315206  524253 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:12:43.315813  524253 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-566990" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:12:43.318424  524253 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-282998/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-566990" cluster setting kubeconfig missing "embed-certs-566990" context setting]
	I1123 10:12:43.319084  524253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:43.322312  524253 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:12:43.353123  524253 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:12:43.353159  524253 kubeadm.go:602] duration metric: took 54.305944ms to restartPrimaryControlPlane
	I1123 10:12:43.353169  524253 kubeadm.go:403] duration metric: took 120.268964ms to StartCluster
	I1123 10:12:43.353194  524253 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:43.353259  524253 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:12:43.354601  524253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:43.354822  524253 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:12:43.355113  524253 config.go:182] Loaded profile config "embed-certs-566990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:12:43.355160  524253 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:12:43.355225  524253 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-566990"
	I1123 10:12:43.355239  524253 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-566990"
	W1123 10:12:43.355250  524253 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:12:43.355270  524253 host.go:66] Checking if "embed-certs-566990" exists ...
	I1123 10:12:43.355693  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:43.356237  524253 addons.go:70] Setting dashboard=true in profile "embed-certs-566990"
	I1123 10:12:43.356271  524253 addons.go:239] Setting addon dashboard=true in "embed-certs-566990"
	W1123 10:12:43.356279  524253 addons.go:248] addon dashboard should already be in state true
	I1123 10:12:43.356301  524253 host.go:66] Checking if "embed-certs-566990" exists ...
	I1123 10:12:43.356703  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:43.359720  524253 addons.go:70] Setting default-storageclass=true in profile "embed-certs-566990"
	I1123 10:12:43.359984  524253 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-566990"
	I1123 10:12:43.360051  524253 out.go:179] * Verifying Kubernetes components...
	I1123 10:12:43.360348  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:43.367497  524253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:12:43.407800  524253 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:12:43.410904  524253 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:12:43.414371  524253 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:12:43.414380  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:12:43.414465  524253 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:12:43.414542  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:43.418448  524253 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:12:43.418496  524253 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:12:43.418571  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:43.423653  524253 addons.go:239] Setting addon default-storageclass=true in "embed-certs-566990"
	W1123 10:12:43.423687  524253 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:12:43.423712  524253 host.go:66] Checking if "embed-certs-566990" exists ...
	I1123 10:12:43.424159  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:43.484207  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:43.492577  524253 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:12:43.492597  524253 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:12:43.492656  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:43.497098  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:43.521919  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:43.738338  524253 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:12:43.742349  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:12:43.742371  524253 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:12:43.752462  524253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:12:43.776705  524253 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:12:43.857503  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:12:43.857585  524253 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:12:43.905189  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:12:43.905260  524253 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:12:43.939612  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:12:43.939682  524253 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:12:44.007517  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:12:44.007602  524253 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:12:44.095700  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:12:44.095770  524253 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:12:44.119812  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:12:44.119886  524253 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:12:44.140047  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:12:44.140117  524253 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:12:44.163777  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:12:44.163853  524253 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:12:44.183465  524253 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1123 10:12:43.882006  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:12:45.882054  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	I1123 10:12:48.392297  524253 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.639773581s)
	I1123 10:12:48.392374  524253 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.654013377s)
	I1123 10:12:48.392342  524253 node_ready.go:35] waiting up to 6m0s for node "embed-certs-566990" to be "Ready" ...
	I1123 10:12:48.442887  524253 node_ready.go:49] node "embed-certs-566990" is "Ready"
	I1123 10:12:48.442917  524253 node_ready.go:38] duration metric: took 50.463895ms for node "embed-certs-566990" to be "Ready" ...
	I1123 10:12:48.442930  524253 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:12:48.442992  524253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:12:49.420300  524253 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.643557424s)
	I1123 10:12:49.459357  524253 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.275803306s)
	I1123 10:12:49.459655  524253 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.01664451s)
	I1123 10:12:49.459700  524253 api_server.go:72] duration metric: took 6.104846088s to wait for apiserver process to appear ...
	I1123 10:12:49.459721  524253 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:12:49.459752  524253 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:12:49.462839  524253 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-566990 addons enable metrics-server
	
	I1123 10:12:49.465769  524253 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1123 10:12:49.468747  524253 addons.go:530] duration metric: took 6.113576404s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1123 10:12:49.477782  524253 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:12:49.477824  524253 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:12:47.883357  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:12:50.381379  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:12:52.381750  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	I1123 10:12:49.960515  524253 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:12:49.968505  524253 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:12:49.969616  524253 api_server.go:141] control plane version: v1.34.1
	I1123 10:12:49.969687  524253 api_server.go:131] duration metric: took 509.945698ms to wait for apiserver health ...
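
	The healthz polling above comes from minikube's api_server.go wait loop: it keeps requesting the apiserver's /healthz endpoint until the 500 responses (here caused by the still-pending rbac/bootstrap-roles post-start hook) turn into a 200. A minimal Go sketch of the same kind of poll, assuming direct network access to the address shown in the log and skipping TLS verification purely for brevity; minikube itself authenticates with the cluster's client certificates, and an unauthenticated caller may get 401/403 instead of the verbose status:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Address taken from the log above; InsecureSkipVerify is only for this
		// sketch, the real wait loop presents the cluster's client certificates.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err != nil {
				fmt.Println("healthz not reachable yet:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // apiserver reports healthy, matching the "200: ok" line above
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
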
	I1123 10:12:49.969704  524253 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:12:49.973092  524253 system_pods.go:59] 8 kube-system pods found
	I1123 10:12:49.973138  524253 system_pods.go:61] "coredns-66bc5c9577-d8sh7" [737943ee-552c-4a07-aa55-978b687c5b59] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:12:49.973188  524253 system_pods.go:61] "etcd-embed-certs-566990" [020c43bd-55e0-40c2-8119-1370611def91] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:12:49.973195  524253 system_pods.go:61] "kindnet-p6kh4" [0047dc25-c013-471f-89b6-22e1399e2dc9] Running
	I1123 10:12:49.973210  524253 system_pods.go:61] "kube-apiserver-embed-certs-566990" [cdc3c57e-a09e-45f0-85f3-865174df4118] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:12:49.973219  524253 system_pods.go:61] "kube-controller-manager-embed-certs-566990" [8057de0a-12ee-4d41-8535-b1b4db1c022e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:12:49.973245  524253 system_pods.go:61] "kube-proxy-k4lvf" [88d44863-5a0e-44f5-9806-2e6e769dc05b] Running
	I1123 10:12:49.973262  524253 system_pods.go:61] "kube-scheduler-embed-certs-566990" [63997f1e-1056-4acd-a564-a8fddff7356f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:12:49.973278  524253 system_pods.go:61] "storage-provisioner" [9f1e25da-6804-44f0-aa70-5ff52015cd12] Running
	I1123 10:12:49.973292  524253 system_pods.go:74] duration metric: took 3.57404ms to wait for pod list to return data ...
	I1123 10:12:49.973301  524253 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:12:49.976664  524253 default_sa.go:45] found service account: "default"
	I1123 10:12:49.976693  524253 default_sa.go:55] duration metric: took 3.382866ms for default service account to be created ...
	I1123 10:12:49.976704  524253 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:12:49.982365  524253 system_pods.go:86] 8 kube-system pods found
	I1123 10:12:49.982407  524253 system_pods.go:89] "coredns-66bc5c9577-d8sh7" [737943ee-552c-4a07-aa55-978b687c5b59] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:12:49.982451  524253 system_pods.go:89] "etcd-embed-certs-566990" [020c43bd-55e0-40c2-8119-1370611def91] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:12:49.982468  524253 system_pods.go:89] "kindnet-p6kh4" [0047dc25-c013-471f-89b6-22e1399e2dc9] Running
	I1123 10:12:49.982476  524253 system_pods.go:89] "kube-apiserver-embed-certs-566990" [cdc3c57e-a09e-45f0-85f3-865174df4118] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:12:49.982497  524253 system_pods.go:89] "kube-controller-manager-embed-certs-566990" [8057de0a-12ee-4d41-8535-b1b4db1c022e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:12:49.982517  524253 system_pods.go:89] "kube-proxy-k4lvf" [88d44863-5a0e-44f5-9806-2e6e769dc05b] Running
	I1123 10:12:49.982534  524253 system_pods.go:89] "kube-scheduler-embed-certs-566990" [63997f1e-1056-4acd-a564-a8fddff7356f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:12:49.982539  524253 system_pods.go:89] "storage-provisioner" [9f1e25da-6804-44f0-aa70-5ff52015cd12] Running
	I1123 10:12:49.982558  524253 system_pods.go:126] duration metric: took 5.847799ms to wait for k8s-apps to be running ...
	I1123 10:12:49.982573  524253 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:12:49.982651  524253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:12:50.008979  524253 system_svc.go:56] duration metric: took 26.393587ms WaitForService to wait for kubelet
	I1123 10:12:50.009017  524253 kubeadm.go:587] duration metric: took 6.654160393s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
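
	The kubelet check a few lines up is just "sudo systemctl is-active --quiet service kubelet" executed on the node through minikube's ssh_runner. A rough equivalent in Go, assuming it runs directly on the node rather than over SSH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// systemctl exits 0 when the unit is active; with --quiet the exit code
		// is the whole answer, which is exactly what the log line above checks.
		if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			fmt.Println("kubelet service is not active:", err)
			return
		}
		fmt.Println("kubelet service is active")
	}
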
	I1123 10:12:50.009064  524253 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:12:50.021985  524253 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:12:50.022022  524253 node_conditions.go:123] node cpu capacity is 2
	I1123 10:12:50.022037  524253 node_conditions.go:105] duration metric: took 12.963649ms to run NodePressure ...
	I1123 10:12:50.022074  524253 start.go:242] waiting for startup goroutines ...
	I1123 10:12:50.022089  524253 start.go:247] waiting for cluster config update ...
	I1123 10:12:50.022101  524253 start.go:256] writing updated cluster config ...
	I1123 10:12:50.022423  524253 ssh_runner.go:195] Run: rm -f paused
	I1123 10:12:50.028023  524253 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:12:50.036471  524253 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d8sh7" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:12:52.042385  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:12:54.043234  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:12:54.382172  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:12:56.382957  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:12:56.543953  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:12:58.544260  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:12:58.882752  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:01.382144  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:01.043682  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:03.542248  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:03.881887  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:06.381622  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:05.543504  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:08.041974  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:08.881893  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:10.882009  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:10.042479  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:12.542369  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:13.382245  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:15.882066  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:15.045187  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:17.542973  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:17.883543  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:20.381661  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	I1123 10:13:21.382065  521335 node_ready.go:49] node "default-k8s-diff-port-330197" is "Ready"
	I1123 10:13:21.382097  521335 node_ready.go:38] duration metric: took 41.503320364s for node "default-k8s-diff-port-330197" to be "Ready" ...
	I1123 10:13:21.382111  521335 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:13:21.382174  521335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:13:21.396972  521335 api_server.go:72] duration metric: took 42.499187505s to wait for apiserver process to appear ...
	I1123 10:13:21.397004  521335 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:13:21.397025  521335 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 10:13:21.412797  521335 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 10:13:21.415245  521335 api_server.go:141] control plane version: v1.34.1
	I1123 10:13:21.415282  521335 api_server.go:131] duration metric: took 18.270424ms to wait for apiserver health ...
	I1123 10:13:21.415291  521335 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:13:21.427092  521335 system_pods.go:59] 8 kube-system pods found
	I1123 10:13:21.427135  521335 system_pods.go:61] "coredns-66bc5c9577-pphv6" [0a9030ea-483e-46e0-8d24-2b0dd1fe99ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:13:21.427143  521335 system_pods.go:61] "etcd-default-k8s-diff-port-330197" [04e76740-6a3c-4f4e-9b5d-2c8999bef68a] Running
	I1123 10:13:21.427149  521335 system_pods.go:61] "kindnet-wfv8n" [aa574e11-da93-494e-8803-f1af18bb542d] Running
	I1123 10:13:21.427161  521335 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-330197" [abfd2542-91c6-409a-b0bf-6b1cf4f427e9] Running
	I1123 10:13:21.427166  521335 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-330197" [c343422c-7b25-41eb-aca3-ae06812b0f50] Running
	I1123 10:13:21.427171  521335 system_pods.go:61] "kube-proxy-75qqt" [e9999f1a-4069-470f-9b88-f9bff97ff125] Running
	I1123 10:13:21.427175  521335 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-330197" [010409e0-e0ee-4de9-a9e6-23ea4a90a923] Running
	I1123 10:13:21.427181  521335 system_pods.go:61] "storage-provisioner" [41502cc7-b934-4a0a-911f-9fb784b38dc3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:13:21.427194  521335 system_pods.go:74] duration metric: took 11.896873ms to wait for pod list to return data ...
	I1123 10:13:21.427202  521335 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:13:21.430678  521335 default_sa.go:45] found service account: "default"
	I1123 10:13:21.430707  521335 default_sa.go:55] duration metric: took 3.491814ms for default service account to be created ...
	I1123 10:13:21.430727  521335 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:13:21.436198  521335 system_pods.go:86] 8 kube-system pods found
	I1123 10:13:21.436235  521335 system_pods.go:89] "coredns-66bc5c9577-pphv6" [0a9030ea-483e-46e0-8d24-2b0dd1fe99ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:13:21.436243  521335 system_pods.go:89] "etcd-default-k8s-diff-port-330197" [04e76740-6a3c-4f4e-9b5d-2c8999bef68a] Running
	I1123 10:13:21.436259  521335 system_pods.go:89] "kindnet-wfv8n" [aa574e11-da93-494e-8803-f1af18bb542d] Running
	I1123 10:13:21.436265  521335 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-330197" [abfd2542-91c6-409a-b0bf-6b1cf4f427e9] Running
	I1123 10:13:21.436270  521335 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-330197" [c343422c-7b25-41eb-aca3-ae06812b0f50] Running
	I1123 10:13:21.436277  521335 system_pods.go:89] "kube-proxy-75qqt" [e9999f1a-4069-470f-9b88-f9bff97ff125] Running
	I1123 10:13:21.436281  521335 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-330197" [010409e0-e0ee-4de9-a9e6-23ea4a90a923] Running
	I1123 10:13:21.436287  521335 system_pods.go:89] "storage-provisioner" [41502cc7-b934-4a0a-911f-9fb784b38dc3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:13:21.436316  521335 retry.go:31] will retry after 267.688988ms: missing components: kube-dns
	I1123 10:13:21.719855  521335 system_pods.go:86] 8 kube-system pods found
	I1123 10:13:21.719901  521335 system_pods.go:89] "coredns-66bc5c9577-pphv6" [0a9030ea-483e-46e0-8d24-2b0dd1fe99ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:13:21.719909  521335 system_pods.go:89] "etcd-default-k8s-diff-port-330197" [04e76740-6a3c-4f4e-9b5d-2c8999bef68a] Running
	I1123 10:13:21.719916  521335 system_pods.go:89] "kindnet-wfv8n" [aa574e11-da93-494e-8803-f1af18bb542d] Running
	I1123 10:13:21.719920  521335 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-330197" [abfd2542-91c6-409a-b0bf-6b1cf4f427e9] Running
	I1123 10:13:21.719925  521335 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-330197" [c343422c-7b25-41eb-aca3-ae06812b0f50] Running
	I1123 10:13:21.719930  521335 system_pods.go:89] "kube-proxy-75qqt" [e9999f1a-4069-470f-9b88-f9bff97ff125] Running
	I1123 10:13:21.719934  521335 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-330197" [010409e0-e0ee-4de9-a9e6-23ea4a90a923] Running
	I1123 10:13:21.719954  521335 system_pods.go:89] "storage-provisioner" [41502cc7-b934-4a0a-911f-9fb784b38dc3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:13:21.719971  521335 retry.go:31] will retry after 299.519958ms: missing components: kube-dns
	I1123 10:13:22.024526  521335 system_pods.go:86] 8 kube-system pods found
	I1123 10:13:22.024578  521335 system_pods.go:89] "coredns-66bc5c9577-pphv6" [0a9030ea-483e-46e0-8d24-2b0dd1fe99ff] Running
	I1123 10:13:22.024597  521335 system_pods.go:89] "etcd-default-k8s-diff-port-330197" [04e76740-6a3c-4f4e-9b5d-2c8999bef68a] Running
	I1123 10:13:22.024602  521335 system_pods.go:89] "kindnet-wfv8n" [aa574e11-da93-494e-8803-f1af18bb542d] Running
	I1123 10:13:22.024617  521335 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-330197" [abfd2542-91c6-409a-b0bf-6b1cf4f427e9] Running
	I1123 10:13:22.024621  521335 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-330197" [c343422c-7b25-41eb-aca3-ae06812b0f50] Running
	I1123 10:13:22.024626  521335 system_pods.go:89] "kube-proxy-75qqt" [e9999f1a-4069-470f-9b88-f9bff97ff125] Running
	I1123 10:13:22.024630  521335 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-330197" [010409e0-e0ee-4de9-a9e6-23ea4a90a923] Running
	I1123 10:13:22.024635  521335 system_pods.go:89] "storage-provisioner" [41502cc7-b934-4a0a-911f-9fb784b38dc3] Running
	I1123 10:13:22.024643  521335 system_pods.go:126] duration metric: took 593.910164ms to wait for k8s-apps to be running ...
	I1123 10:13:22.024651  521335 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:13:22.024744  521335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:13:22.044406  521335 system_svc.go:56] duration metric: took 19.746115ms WaitForService to wait for kubelet
	I1123 10:13:22.044435  521335 kubeadm.go:587] duration metric: took 43.146660616s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:13:22.044455  521335 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:13:22.047630  521335 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:13:22.047663  521335 node_conditions.go:123] node cpu capacity is 2
	I1123 10:13:22.047677  521335 node_conditions.go:105] duration metric: took 3.217127ms to run NodePressure ...
	I1123 10:13:22.047694  521335 start.go:242] waiting for startup goroutines ...
	I1123 10:13:22.047702  521335 start.go:247] waiting for cluster config update ...
	I1123 10:13:22.047713  521335 start.go:256] writing updated cluster config ...
	I1123 10:13:22.048016  521335 ssh_runner.go:195] Run: rm -f paused
	I1123 10:13:22.052242  521335 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:13:22.056674  521335 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pphv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.062088  521335 pod_ready.go:94] pod "coredns-66bc5c9577-pphv6" is "Ready"
	I1123 10:13:22.062114  521335 pod_ready.go:86] duration metric: took 5.372157ms for pod "coredns-66bc5c9577-pphv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.064742  521335 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.069906  521335 pod_ready.go:94] pod "etcd-default-k8s-diff-port-330197" is "Ready"
	I1123 10:13:22.069933  521335 pod_ready.go:86] duration metric: took 5.16526ms for pod "etcd-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.072948  521335 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.078059  521335 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-330197" is "Ready"
	I1123 10:13:22.078084  521335 pod_ready.go:86] duration metric: took 5.111442ms for pod "kube-apiserver-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.080881  521335 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.457153  521335 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-330197" is "Ready"
	I1123 10:13:22.457182  521335 pod_ready.go:86] duration metric: took 376.276326ms for pod "kube-controller-manager-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.657820  521335 pod_ready.go:83] waiting for pod "kube-proxy-75qqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:23.057150  521335 pod_ready.go:94] pod "kube-proxy-75qqt" is "Ready"
	I1123 10:13:23.057200  521335 pod_ready.go:86] duration metric: took 399.352644ms for pod "kube-proxy-75qqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:23.257221  521335 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:23.657699  521335 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-330197" is "Ready"
	I1123 10:13:23.657728  521335 pod_ready.go:86] duration metric: took 400.478199ms for pod "kube-scheduler-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:23.657742  521335 pod_ready.go:40] duration metric: took 1.605465474s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:13:23.714769  521335 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:13:23.718401  521335 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-330197" cluster and "default" namespace by default
	W1123 10:13:20.044347  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:22.542279  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	I1123 10:13:24.041853  524253 pod_ready.go:94] pod "coredns-66bc5c9577-d8sh7" is "Ready"
	I1123 10:13:24.041901  524253 pod_ready.go:86] duration metric: took 34.005401362s for pod "coredns-66bc5c9577-d8sh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.044925  524253 pod_ready.go:83] waiting for pod "etcd-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.049926  524253 pod_ready.go:94] pod "etcd-embed-certs-566990" is "Ready"
	I1123 10:13:24.049956  524253 pod_ready.go:86] duration metric: took 5.009345ms for pod "etcd-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.052293  524253 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.057957  524253 pod_ready.go:94] pod "kube-apiserver-embed-certs-566990" is "Ready"
	I1123 10:13:24.057987  524253 pod_ready.go:86] duration metric: took 5.668112ms for pod "kube-apiserver-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.060529  524253 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.240980  524253 pod_ready.go:94] pod "kube-controller-manager-embed-certs-566990" is "Ready"
	I1123 10:13:24.241007  524253 pod_ready.go:86] duration metric: took 180.45021ms for pod "kube-controller-manager-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.440356  524253 pod_ready.go:83] waiting for pod "kube-proxy-k4lvf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.840001  524253 pod_ready.go:94] pod "kube-proxy-k4lvf" is "Ready"
	I1123 10:13:24.840031  524253 pod_ready.go:86] duration metric: took 399.646726ms for pod "kube-proxy-k4lvf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:25.040159  524253 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:25.439745  524253 pod_ready.go:94] pod "kube-scheduler-embed-certs-566990" is "Ready"
	I1123 10:13:25.439815  524253 pod_ready.go:86] duration metric: took 399.627786ms for pod "kube-scheduler-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:25.439837  524253 pod_ready.go:40] duration metric: took 35.411776765s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:13:25.495301  524253 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:13:25.498797  524253 out.go:179] * Done! kubectl is now configured to use "embed-certs-566990" cluster and "default" namespace by default
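
	Both startup logs end with the same pod_ready.go extra wait: every control-plane pod selected by the listed labels is polled until its Ready condition is True or the pod is gone. Assuming a kubeconfig already pointing at one of these clusters, roughly the same check can be reproduced with kubectl wait, sketched here in Go:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Label selectors copied from the pod_ready.go line in the log above.
		selectors := []string{
			"k8s-app=kube-dns",
			"component=etcd",
			"component=kube-apiserver",
			"component=kube-controller-manager",
			"k8s-app=kube-proxy",
			"component=kube-scheduler",
		}
		for _, sel := range selectors {
			// kubectl wait blocks until every matching pod reports Ready,
			// or gives up after the timeout (4m, matching the log above).
			cmd := exec.Command("kubectl", "wait", "pod",
				"-n", "kube-system", "-l", sel,
				"--for=condition=Ready", "--timeout=4m")
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Printf("pods with %q not Ready: %v\n", sel, err)
			}
		}
	}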
	
	
	==> CRI-O <==
	Nov 23 10:13:21 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:21.636971102Z" level=info msg="Created container d30e31c0e1b44dd4dc217bd50a877a1176e96b393872c67f831907ed5140be54: kube-system/storage-provisioner/storage-provisioner" id=9c066a92-5347-4fda-aff7-a509dc523b0d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:13:21 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:21.63811883Z" level=info msg="Starting container: d30e31c0e1b44dd4dc217bd50a877a1176e96b393872c67f831907ed5140be54" id=67785682-1672-4c93-b495-6b5c9cab1420 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:13:21 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:21.640067366Z" level=info msg="Started container" PID=1718 containerID=d30e31c0e1b44dd4dc217bd50a877a1176e96b393872c67f831907ed5140be54 description=kube-system/storage-provisioner/storage-provisioner id=67785682-1672-4c93-b495-6b5c9cab1420 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fdc24dab704b98b9df75f9e01c5915d1a9515fda77d3bd7c9c1f7b0c73fad8ac
	Nov 23 10:13:24 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:24.289969612Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7ea03247-7cfb-4812-8d9b-7d393aacfb59 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:13:24 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:24.290041728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:13:24 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:24.295311573Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d8795cd4a10d1b1d2435a205721371d3bcdfc36253e0347716990b60ecdb6d0a UID:4387e28f-77a9-4288-b0ad-d58ae149c2b9 NetNS:/var/run/netns/a3fb894f-fe50-4c34-b8a0-3d94137d0539 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079900}] Aliases:map[]}"
	Nov 23 10:13:24 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:24.295504667Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 10:13:24 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:24.306734707Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d8795cd4a10d1b1d2435a205721371d3bcdfc36253e0347716990b60ecdb6d0a UID:4387e28f-77a9-4288-b0ad-d58ae149c2b9 NetNS:/var/run/netns/a3fb894f-fe50-4c34-b8a0-3d94137d0539 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079900}] Aliases:map[]}"
	Nov 23 10:13:24 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:24.307029973Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 10:13:24 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:24.311625813Z" level=info msg="Ran pod sandbox d8795cd4a10d1b1d2435a205721371d3bcdfc36253e0347716990b60ecdb6d0a with infra container: default/busybox/POD" id=7ea03247-7cfb-4812-8d9b-7d393aacfb59 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:13:24 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:24.330807497Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b8169fad-08ea-48e7-8dde-5756c4bc1232 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:13:24 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:24.330950498Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b8169fad-08ea-48e7-8dde-5756c4bc1232 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:13:24 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:24.330992049Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b8169fad-08ea-48e7-8dde-5756c4bc1232 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:13:24 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:24.331769572Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=764b8ffc-1f7e-4237-9987-56d6fbf42e83 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:13:24 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:24.334754719Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 10:13:26 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:26.342835513Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=764b8ffc-1f7e-4237-9987-56d6fbf42e83 name=/runtime.v1.ImageService/PullImage
	Nov 23 10:13:26 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:26.343703458Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eef0181d-6667-48e8-8043-5cbae8b86813 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:13:26 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:26.346757472Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=52a10ea1-6ce8-44c1-b919-ae6ec638a33f name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:13:26 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:26.354619367Z" level=info msg="Creating container: default/busybox/busybox" id=601fb887-480d-4418-ab37-9373abe76b0e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:13:26 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:26.354809064Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:13:26 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:26.359928244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:13:26 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:26.360376579Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:13:26 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:26.37515116Z" level=info msg="Created container c4e5192034be87470284717c3257c317443cf6e351ea50ea4b081ef00e06e5a9: default/busybox/busybox" id=601fb887-480d-4418-ab37-9373abe76b0e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:13:26 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:26.377541672Z" level=info msg="Starting container: c4e5192034be87470284717c3257c317443cf6e351ea50ea4b081ef00e06e5a9" id=0d211a49-eef0-47f4-a8c3-9a6d75973835 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:13:26 default-k8s-diff-port-330197 crio[840]: time="2025-11-23T10:13:26.380731533Z" level=info msg="Started container" PID=1776 containerID=c4e5192034be87470284717c3257c317443cf6e351ea50ea4b081ef00e06e5a9 description=default/busybox/busybox id=0d211a49-eef0-47f4-a8c3-9a6d75973835 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d8795cd4a10d1b1d2435a205721371d3bcdfc36253e0347716990b60ecdb6d0a
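
	The CRI-O lines above show the usual pull path for the busybox test image: the tag is not in the local store, so CRI-O pulls it and resolves it to a digest before the container is created. The same store can be inspected by hand on the node (for example from a "minikube ssh" session); a small Go sketch, assuming crictl is invoked with root privileges on the node itself:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func run(name string, args ...string) {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, name, "failed:", err)
		}
	}

	func main() {
		// crictl talks to the same CRI-O instance that produced the log above.
		run("sudo", "crictl", "images")                                           // list what is already in the store
		run("sudo", "crictl", "pull", "gcr.io/k8s-minikube/busybox:1.28.4-glibc") // pull by tag, as the kubelet did
	}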
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	c4e5192034be8       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   d8795cd4a10d1       busybox                                                default
	d30e31c0e1b44       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   fdc24dab704b9       storage-provisioner                                    kube-system
	e3c789a27e88a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   fe5ef2c4b9021       coredns-66bc5c9577-pphv6                               kube-system
	69db4e33bf090       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   1d259b7f0beca       kindnet-wfv8n                                          kube-system
	c255f0ea4f485       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   8f474e3940627       kube-proxy-75qqt                                       kube-system
	8deb8fcd7f5fb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   9556c4e520c61       kube-controller-manager-default-k8s-diff-port-330197   kube-system
	7868ceff5c333       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   669cd41a8fb76       kube-apiserver-default-k8s-diff-port-330197            kube-system
	1d99e796c461b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   4f61e52e5267f       kube-scheduler-default-k8s-diff-port-330197            kube-system
	c2ef89dcbc5ac       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   6a85acd05b742       etcd-default-k8s-diff-port-330197                      kube-system
	
	
	==> coredns [e3c789a27e88aa1aea75bd1bfaad9f324492de59db6eca01a62877efdf5f802a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41327 - 4242 "HINFO IN 8899734478954721948.1847996092154032658. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013543497s
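
	The CoreDNS banner prints the SHA512 that the reload plugin computed over the loaded Corefile. That Corefile lives in the coredns ConfigMap (the kubeadm default, which minikube uses); a small sketch of dumping it, assuming kubectl access to the cluster:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Dumps the configuration that the SHA512 in the banner above was derived from.
		out, err := exec.Command("kubectl", "-n", "kube-system",
			"get", "configmap", "coredns", "-o", "yaml").Output()
		if err != nil {
			fmt.Println("failed to read coredns ConfigMap:", err)
			return
		}
		fmt.Print(string(out))
	}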
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-330197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-330197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=default-k8s-diff-port-330197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_12_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:12:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-330197
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:13:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:13:21 +0000   Sun, 23 Nov 2025 10:12:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:13:21 +0000   Sun, 23 Nov 2025 10:12:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:13:21 +0000   Sun, 23 Nov 2025 10:12:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:13:21 +0000   Sun, 23 Nov 2025 10:13:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-330197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                9fb10197-c662-4288-a6e4-d39f9ec1d57e
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-pphv6                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-330197                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-wfv8n                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-330197             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-330197    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-75qqt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-330197             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 68s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)  kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node default-k8s-diff-port-330197 event: Registered Node default-k8s-diff-port-330197 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-330197 status is now: NodeReady
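
	The NodeReady event at the bottom is what the earlier node_ready.go retries were waiting on: the node's Ready condition only flipped to True at 10:13:21. A small sketch of reading that condition directly, assuming kubectl access to the cluster; the node name is the one from this log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Prints "True" once the kubelet reports the node Ready, which is the
		// condition minikube's node_ready.go keeps retrying on.
		out, err := exec.Command("kubectl", "get", "node", "default-k8s-diff-port-330197",
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			fmt.Println("failed to query node:", err)
			return
		}
		fmt.Println("node Ready condition:", string(out))
	}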
	
	
	==> dmesg <==
	[ +14.190024] overlayfs: idmapped layers are currently not supported
	[Nov23 09:49] overlayfs: idmapped layers are currently not supported
	[Nov23 09:50] overlayfs: idmapped layers are currently not supported
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	[Nov23 10:10] overlayfs: idmapped layers are currently not supported
	[Nov23 10:11] overlayfs: idmapped layers are currently not supported
	[Nov23 10:12] overlayfs: idmapped layers are currently not supported
	[ +16.378417] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c2ef89dcbc5ac15da9260b49dab04f247b65918aa2ae5a142f19377a4167cd84] <==
	{"level":"warn","ts":"2025-11-23T10:12:29.027130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.049861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.066744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.086237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.117584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.165631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.211104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.229989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.256953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.297525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.319103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.355250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.381827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.405587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.434402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.454449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.474651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.502303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.529742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.553624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.593056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.626097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.648104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.661242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:29.753670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42942","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:13:34 up  2:56,  0 user,  load average: 3.51, 4.34, 3.54
	Linux default-k8s-diff-port-330197 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [69db4e33bf09096568d293caf4e9c0c04f921950001b30661fc9ac9c6e9bf781] <==
	I1123 10:12:40.576356       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:12:40.658121       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 10:12:40.658407       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:12:40.658475       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:12:40.658549       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:12:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:12:40.858755       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:12:40.858785       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:12:40.858794       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:12:40.859437       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 10:13:10.859922       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 10:13:10.859925       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 10:13:10.860040       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 10:13:10.860139       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1123 10:13:12.459478       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:13:12.459603       1 metrics.go:72] Registering metrics
	I1123 10:13:12.459686       1 controller.go:711] "Syncing nftables rules"
	I1123 10:13:20.864419       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:13:20.864475       1 main.go:301] handling current node
	I1123 10:13:30.860028       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:13:30.860065       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7868ceff5c333d04eb8493562f1559ab9daf856b04318b04dcdc72cc30953bb3] <==
	I1123 10:12:30.723969       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 10:12:30.724070       1 policy_source.go:240] refreshing policies
	I1123 10:12:30.759159       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:12:30.790682       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:12:30.794064       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 10:12:30.810763       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:12:30.812051       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:12:30.888581       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:12:31.463397       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:12:31.470810       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:12:31.470896       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:12:32.402711       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:12:32.467651       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:12:32.573088       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:12:32.585732       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 10:12:32.587057       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:12:32.603295       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:12:32.619235       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:12:33.760125       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:12:33.784110       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:12:33.805863       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 10:12:38.301217       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:12:38.466445       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:12:38.478145       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:12:38.703575       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8deb8fcd7f5fba1d61d1a1f7bc8b8ae7f4e97725a11775963192cd819dbf39b3] <==
	I1123 10:12:37.603014       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:12:37.606259       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 10:12:37.608540       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:12:37.610669       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:12:37.612922       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:12:37.614103       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 10:12:37.623520       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 10:12:37.623528       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:12:37.623586       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 10:12:37.623594       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 10:12:37.624726       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:12:37.624847       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:12:37.624904       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:12:37.632688       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:12:37.636585       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 10:12:37.636686       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:12:37.636761       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-330197"
	I1123 10:12:37.636863       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 10:12:37.646649       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 10:12:37.646670       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:12:37.646695       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 10:12:37.648043       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:12:37.654050       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:12:37.654121       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 10:13:22.641796       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c255f0ea4f4858577f19ddd54778746ffa2528c95895fea1971bff31f26b22cb] <==
	I1123 10:12:40.531072       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:12:40.615186       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:12:40.717560       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:12:40.717597       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 10:12:40.717667       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:12:40.742499       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:12:40.742565       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:12:40.746213       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:12:40.746566       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:12:40.746589       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:12:40.748703       1 config.go:200] "Starting service config controller"
	I1123 10:12:40.748723       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:12:40.748742       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:12:40.748747       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:12:40.748796       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:12:40.748809       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:12:40.751652       1 config.go:309] "Starting node config controller"
	I1123 10:12:40.751672       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:12:40.751685       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:12:40.852145       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:12:40.852150       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:12:40.852179       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1d99e796c461b6275c6e2929e095d991bc5f186986f455d0d927416e1df57859] <==
	I1123 10:12:31.990467       1 serving.go:386] Generated self-signed cert in-memory
	I1123 10:12:32.889442       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:12:32.889478       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:12:32.896387       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 10:12:32.896788       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:12:32.896803       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:12:32.898625       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:12:32.896815       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:12:32.898741       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:12:32.896740       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:12:32.898477       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 10:12:33.000730       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:12:33.000799       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:12:33.002460       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 23 10:12:37 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:12:37.602482    1298 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 10:12:38 default-k8s-diff-port-330197 kubelet[1298]: E1123 10:12:38.810257    1298 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:default-k8s-diff-port-330197\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-330197' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 23 10:12:38 default-k8s-diff-port-330197 kubelet[1298]: E1123 10:12:38.810698    1298 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-75qqt\" is forbidden: User \"system:node:default-k8s-diff-port-330197\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-330197' and this object" podUID="e9999f1a-4069-470f-9b88-f9bff97ff125" pod="kube-system/kube-proxy-75qqt"
	Nov 23 10:12:38 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:12:38.835688    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e9999f1a-4069-470f-9b88-f9bff97ff125-kube-proxy\") pod \"kube-proxy-75qqt\" (UID: \"e9999f1a-4069-470f-9b88-f9bff97ff125\") " pod="kube-system/kube-proxy-75qqt"
	Nov 23 10:12:38 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:12:38.835830    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9999f1a-4069-470f-9b88-f9bff97ff125-xtables-lock\") pod \"kube-proxy-75qqt\" (UID: \"e9999f1a-4069-470f-9b88-f9bff97ff125\") " pod="kube-system/kube-proxy-75qqt"
	Nov 23 10:12:38 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:12:38.835851    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9999f1a-4069-470f-9b88-f9bff97ff125-lib-modules\") pod \"kube-proxy-75qqt\" (UID: \"e9999f1a-4069-470f-9b88-f9bff97ff125\") " pod="kube-system/kube-proxy-75qqt"
	Nov 23 10:12:38 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:12:38.835869    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9bs4\" (UniqueName: \"kubernetes.io/projected/e9999f1a-4069-470f-9b88-f9bff97ff125-kube-api-access-g9bs4\") pod \"kube-proxy-75qqt\" (UID: \"e9999f1a-4069-470f-9b88-f9bff97ff125\") " pod="kube-system/kube-proxy-75qqt"
	Nov 23 10:12:39 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:12:39.042037    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aa574e11-da93-494e-8803-f1af18bb542d-cni-cfg\") pod \"kindnet-wfv8n\" (UID: \"aa574e11-da93-494e-8803-f1af18bb542d\") " pod="kube-system/kindnet-wfv8n"
	Nov 23 10:12:39 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:12:39.042765    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa574e11-da93-494e-8803-f1af18bb542d-xtables-lock\") pod \"kindnet-wfv8n\" (UID: \"aa574e11-da93-494e-8803-f1af18bb542d\") " pod="kube-system/kindnet-wfv8n"
	Nov 23 10:12:39 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:12:39.042807    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9sz9\" (UniqueName: \"kubernetes.io/projected/aa574e11-da93-494e-8803-f1af18bb542d-kube-api-access-b9sz9\") pod \"kindnet-wfv8n\" (UID: \"aa574e11-da93-494e-8803-f1af18bb542d\") " pod="kube-system/kindnet-wfv8n"
	Nov 23 10:12:39 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:12:39.042932    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa574e11-da93-494e-8803-f1af18bb542d-lib-modules\") pod \"kindnet-wfv8n\" (UID: \"aa574e11-da93-494e-8803-f1af18bb542d\") " pod="kube-system/kindnet-wfv8n"
	Nov 23 10:12:40 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:12:40.085524    1298 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 10:12:40 default-k8s-diff-port-330197 kubelet[1298]: W1123 10:12:40.312474    1298 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c/crio-8f474e3940627a5a707ecdfa2ebb791afc5a8241a7cb000ceadf0357783e24ad WatchSource:0}: Error finding container 8f474e3940627a5a707ecdfa2ebb791afc5a8241a7cb000ceadf0357783e24ad: Status 404 returned error can't find the container with id 8f474e3940627a5a707ecdfa2ebb791afc5a8241a7cb000ceadf0357783e24ad
	Nov 23 10:12:40 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:12:40.835100    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wfv8n" podStartSLOduration=2.835084609 podStartE2EDuration="2.835084609s" podCreationTimestamp="2025-11-23 10:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:12:40.834682954 +0000 UTC m=+7.266553666" watchObservedRunningTime="2025-11-23 10:12:40.835084609 +0000 UTC m=+7.266955322"
	Nov 23 10:12:40 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:12:40.914524    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-75qqt" podStartSLOduration=2.914504507 podStartE2EDuration="2.914504507s" podCreationTimestamp="2025-11-23 10:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:12:40.8819915 +0000 UTC m=+7.313862213" watchObservedRunningTime="2025-11-23 10:12:40.914504507 +0000 UTC m=+7.346375220"
	Nov 23 10:13:21 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:13:21.207275    1298 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 10:13:21 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:13:21.302542    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/41502cc7-b934-4a0a-911f-9fb784b38dc3-tmp\") pod \"storage-provisioner\" (UID: \"41502cc7-b934-4a0a-911f-9fb784b38dc3\") " pod="kube-system/storage-provisioner"
	Nov 23 10:13:21 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:13:21.302597    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8v5h\" (UniqueName: \"kubernetes.io/projected/41502cc7-b934-4a0a-911f-9fb784b38dc3-kube-api-access-g8v5h\") pod \"storage-provisioner\" (UID: \"41502cc7-b934-4a0a-911f-9fb784b38dc3\") " pod="kube-system/storage-provisioner"
	Nov 23 10:13:21 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:13:21.302621    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wc9b\" (UniqueName: \"kubernetes.io/projected/0a9030ea-483e-46e0-8d24-2b0dd1fe99ff-kube-api-access-6wc9b\") pod \"coredns-66bc5c9577-pphv6\" (UID: \"0a9030ea-483e-46e0-8d24-2b0dd1fe99ff\") " pod="kube-system/coredns-66bc5c9577-pphv6"
	Nov 23 10:13:21 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:13:21.302645    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a9030ea-483e-46e0-8d24-2b0dd1fe99ff-config-volume\") pod \"coredns-66bc5c9577-pphv6\" (UID: \"0a9030ea-483e-46e0-8d24-2b0dd1fe99ff\") " pod="kube-system/coredns-66bc5c9577-pphv6"
	Nov 23 10:13:21 default-k8s-diff-port-330197 kubelet[1298]: W1123 10:13:21.576536    1298 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c/crio-fe5ef2c4b9021cb53db3e292af040fc48be448bc634f4a13c8957f462ff2d0b1 WatchSource:0}: Error finding container fe5ef2c4b9021cb53db3e292af040fc48be448bc634f4a13c8957f462ff2d0b1: Status 404 returned error can't find the container with id fe5ef2c4b9021cb53db3e292af040fc48be448bc634f4a13c8957f462ff2d0b1
	Nov 23 10:13:21 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:13:21.955499    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.955477703 podStartE2EDuration="41.955477703s" podCreationTimestamp="2025-11-23 10:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:13:21.936826694 +0000 UTC m=+48.368697415" watchObservedRunningTime="2025-11-23 10:13:21.955477703 +0000 UTC m=+48.387348416"
	Nov 23 10:13:23 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:13:23.974109    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pphv6" podStartSLOduration=45.974092638 podStartE2EDuration="45.974092638s" podCreationTimestamp="2025-11-23 10:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:13:21.956276313 +0000 UTC m=+48.388147050" watchObservedRunningTime="2025-11-23 10:13:23.974092638 +0000 UTC m=+50.405963359"
	Nov 23 10:13:24 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:13:24.023037    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n65d8\" (UniqueName: \"kubernetes.io/projected/4387e28f-77a9-4288-b0ad-d58ae149c2b9-kube-api-access-n65d8\") pod \"busybox\" (UID: \"4387e28f-77a9-4288-b0ad-d58ae149c2b9\") " pod="default/busybox"
	Nov 23 10:13:26 default-k8s-diff-port-330197 kubelet[1298]: I1123 10:13:26.957557    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.944199437 podStartE2EDuration="3.957535175s" podCreationTimestamp="2025-11-23 10:13:23 +0000 UTC" firstStartedPulling="2025-11-23 10:13:24.331298311 +0000 UTC m=+50.763169024" lastFinishedPulling="2025-11-23 10:13:26.344634049 +0000 UTC m=+52.776504762" observedRunningTime="2025-11-23 10:13:26.956068082 +0000 UTC m=+53.387938812" watchObservedRunningTime="2025-11-23 10:13:26.957535175 +0000 UTC m=+53.389406101"
	
	
	==> storage-provisioner [d30e31c0e1b44dd4dc217bd50a877a1176e96b393872c67f831907ed5140be54] <==
	I1123 10:13:21.682461       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:13:21.699722       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:13:21.699955       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:13:21.702605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:21.713872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:13:21.714100       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:13:21.714484       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-330197_b865ff72-c69a-4336-939b-fa3c8cce68eb!
	I1123 10:13:21.715517       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5c776180-63cd-4909-9a5b-31f492baafc6", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-330197_b865ff72-c69a-4336-939b-fa3c8cce68eb became leader
	W1123 10:13:21.726420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:21.732551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:13:21.815019       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-330197_b865ff72-c69a-4336-939b-fa3c8cce68eb!
	W1123 10:13:23.735992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:23.742718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:25.745550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:25.750143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:27.752883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:27.757293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:29.760103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:29.765041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:31.769191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:31.776196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:33.779547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:33.785562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-330197 -n default-k8s-diff-port-330197
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-330197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.58s)
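
Note on the storage-provisioner log above: the repeated client-go warnings ("v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice") appear because the provisioner still stores its leader-election lock in a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, see the LeaderElection event), whereas newer leader election is normally backed by a coordination.k8s.io/v1 Lease. A minimal sketch for inspecting both kinds of lock against this profile (illustrative commands, not part of the test harness):

	# Lock object the provisioner uses today (triggers the same deprecation warning)
	kubectl --context default-k8s-diff-port-330197 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# Lease-based locks held by the control-plane components in the same namespace
	kubectl --context default-k8s-diff-port-330197 -n kube-system get leases.coordination.k8s.io
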

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-566990 --alsologtostderr -v=1
E1123 10:13:39.314356  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/calico-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-566990 --alsologtostderr -v=1: exit status 80 (2.584231897s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-566990 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:13:37.408719  527592 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:13:37.408864  527592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:13:37.408871  527592 out.go:374] Setting ErrFile to fd 2...
	I1123 10:13:37.408876  527592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:13:37.409173  527592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:13:37.409526  527592 out.go:368] Setting JSON to false
	I1123 10:13:37.409607  527592 mustload.go:66] Loading cluster: embed-certs-566990
	I1123 10:13:37.410086  527592 config.go:182] Loaded profile config "embed-certs-566990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:13:37.410634  527592 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:13:37.431251  527592 host.go:66] Checking if "embed-certs-566990" exists ...
	I1123 10:13:37.431752  527592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:13:37.501395  527592 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-23 10:13:37.491923412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:13:37.502156  527592 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-566990 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 10:13:37.505979  527592 out.go:179] * Pausing node embed-certs-566990 ... 
	I1123 10:13:37.508951  527592 host.go:66] Checking if "embed-certs-566990" exists ...
	I1123 10:13:37.509295  527592 ssh_runner.go:195] Run: systemctl --version
	I1123 10:13:37.509345  527592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:13:37.530040  527592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:13:37.640759  527592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:13:37.661773  527592 pause.go:52] kubelet running: true
	I1123 10:13:37.661870  527592 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:13:37.944498  527592 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:13:37.944586  527592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:13:38.016669  527592 cri.go:89] found id: "1f2c0a1a12843b954c961d5ac9cc2b63a6e365a430f494828ff5d31fa2951e5a"
	I1123 10:13:38.016696  527592 cri.go:89] found id: "19dc4b8d2d9db97e17ff50ea3872f7c8f26c53f8c48c68cbd62ab46f6229554a"
	I1123 10:13:38.016702  527592 cri.go:89] found id: "f6b85f94b8d9fb08196e9f8bebc066233445b88b74d7b58a3b7d49897d952cb5"
	I1123 10:13:38.016706  527592 cri.go:89] found id: "b5bff28be9cd6a59d8450e8ef4e11b37cfe957b8f2342050eeae3e5a4c182b02"
	I1123 10:13:38.016710  527592 cri.go:89] found id: "2b35205fbca876dcf845d877fb53cf5356a2ead6e0e926f5cbe593d89e17d643"
	I1123 10:13:38.016721  527592 cri.go:89] found id: "093ac2649d8d4c27fb9abf9413c73fc91911e373c30d8cfb1b331503417cbb03"
	I1123 10:13:38.016724  527592 cri.go:89] found id: "d1785fb925da49928f40a36ef58b27c751da4842126c62aae26166fa662da54e"
	I1123 10:13:38.016728  527592 cri.go:89] found id: "34c25c168914821590128b9aa6e866de7484d016e755b1b4599ef135b1d8e798"
	I1123 10:13:38.016731  527592 cri.go:89] found id: "f29cb2a59da8783e967adae52ce1168c66382986731fa4200f19d9893b3da9b2"
	I1123 10:13:38.016737  527592 cri.go:89] found id: "955185be0a8e3482f73c38cb4aead784358d9023b8b6180ccd3cf62d25134e1e"
	I1123 10:13:38.016741  527592 cri.go:89] found id: "cbfffd99f2e092d45a7787fa5a6e7773e4ecef4a0c16e5c9b7dd2f7c68af9e60"
	I1123 10:13:38.016744  527592 cri.go:89] found id: ""
	I1123 10:13:38.016797  527592 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:13:38.029110  527592 retry.go:31] will retry after 208.530032ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:13:38Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:13:38.238641  527592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:13:38.252777  527592 pause.go:52] kubelet running: false
	I1123 10:13:38.252890  527592 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:13:38.460570  527592 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:13:38.460667  527592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:13:38.524300  527592 cri.go:89] found id: "1f2c0a1a12843b954c961d5ac9cc2b63a6e365a430f494828ff5d31fa2951e5a"
	I1123 10:13:38.524326  527592 cri.go:89] found id: "19dc4b8d2d9db97e17ff50ea3872f7c8f26c53f8c48c68cbd62ab46f6229554a"
	I1123 10:13:38.524331  527592 cri.go:89] found id: "f6b85f94b8d9fb08196e9f8bebc066233445b88b74d7b58a3b7d49897d952cb5"
	I1123 10:13:38.524338  527592 cri.go:89] found id: "b5bff28be9cd6a59d8450e8ef4e11b37cfe957b8f2342050eeae3e5a4c182b02"
	I1123 10:13:38.524342  527592 cri.go:89] found id: "2b35205fbca876dcf845d877fb53cf5356a2ead6e0e926f5cbe593d89e17d643"
	I1123 10:13:38.524345  527592 cri.go:89] found id: "093ac2649d8d4c27fb9abf9413c73fc91911e373c30d8cfb1b331503417cbb03"
	I1123 10:13:38.524349  527592 cri.go:89] found id: "d1785fb925da49928f40a36ef58b27c751da4842126c62aae26166fa662da54e"
	I1123 10:13:38.524352  527592 cri.go:89] found id: "34c25c168914821590128b9aa6e866de7484d016e755b1b4599ef135b1d8e798"
	I1123 10:13:38.524355  527592 cri.go:89] found id: "f29cb2a59da8783e967adae52ce1168c66382986731fa4200f19d9893b3da9b2"
	I1123 10:13:38.524361  527592 cri.go:89] found id: "955185be0a8e3482f73c38cb4aead784358d9023b8b6180ccd3cf62d25134e1e"
	I1123 10:13:38.524365  527592 cri.go:89] found id: "cbfffd99f2e092d45a7787fa5a6e7773e4ecef4a0c16e5c9b7dd2f7c68af9e60"
	I1123 10:13:38.524368  527592 cri.go:89] found id: ""
	I1123 10:13:38.524416  527592 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:13:38.535510  527592 retry.go:31] will retry after 450.609918ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:13:38Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:13:38.987304  527592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:13:39.000594  527592 pause.go:52] kubelet running: false
	I1123 10:13:39.000665  527592 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:13:39.163642  527592 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:13:39.163721  527592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:13:39.229854  527592 cri.go:89] found id: "1f2c0a1a12843b954c961d5ac9cc2b63a6e365a430f494828ff5d31fa2951e5a"
	I1123 10:13:39.229893  527592 cri.go:89] found id: "19dc4b8d2d9db97e17ff50ea3872f7c8f26c53f8c48c68cbd62ab46f6229554a"
	I1123 10:13:39.229899  527592 cri.go:89] found id: "f6b85f94b8d9fb08196e9f8bebc066233445b88b74d7b58a3b7d49897d952cb5"
	I1123 10:13:39.229903  527592 cri.go:89] found id: "b5bff28be9cd6a59d8450e8ef4e11b37cfe957b8f2342050eeae3e5a4c182b02"
	I1123 10:13:39.229906  527592 cri.go:89] found id: "2b35205fbca876dcf845d877fb53cf5356a2ead6e0e926f5cbe593d89e17d643"
	I1123 10:13:39.229915  527592 cri.go:89] found id: "093ac2649d8d4c27fb9abf9413c73fc91911e373c30d8cfb1b331503417cbb03"
	I1123 10:13:39.229919  527592 cri.go:89] found id: "d1785fb925da49928f40a36ef58b27c751da4842126c62aae26166fa662da54e"
	I1123 10:13:39.229922  527592 cri.go:89] found id: "34c25c168914821590128b9aa6e866de7484d016e755b1b4599ef135b1d8e798"
	I1123 10:13:39.229926  527592 cri.go:89] found id: "f29cb2a59da8783e967adae52ce1168c66382986731fa4200f19d9893b3da9b2"
	I1123 10:13:39.229932  527592 cri.go:89] found id: "955185be0a8e3482f73c38cb4aead784358d9023b8b6180ccd3cf62d25134e1e"
	I1123 10:13:39.229941  527592 cri.go:89] found id: "cbfffd99f2e092d45a7787fa5a6e7773e4ecef4a0c16e5c9b7dd2f7c68af9e60"
	I1123 10:13:39.229944  527592 cri.go:89] found id: ""
	I1123 10:13:39.229994  527592 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:13:39.242993  527592 retry.go:31] will retry after 386.268751ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:13:39Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:13:39.629560  527592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:13:39.643387  527592 pause.go:52] kubelet running: false
	I1123 10:13:39.643497  527592 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:13:39.809170  527592 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:13:39.809273  527592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:13:39.876640  527592 cri.go:89] found id: "1f2c0a1a12843b954c961d5ac9cc2b63a6e365a430f494828ff5d31fa2951e5a"
	I1123 10:13:39.876666  527592 cri.go:89] found id: "19dc4b8d2d9db97e17ff50ea3872f7c8f26c53f8c48c68cbd62ab46f6229554a"
	I1123 10:13:39.876671  527592 cri.go:89] found id: "f6b85f94b8d9fb08196e9f8bebc066233445b88b74d7b58a3b7d49897d952cb5"
	I1123 10:13:39.876675  527592 cri.go:89] found id: "b5bff28be9cd6a59d8450e8ef4e11b37cfe957b8f2342050eeae3e5a4c182b02"
	I1123 10:13:39.876678  527592 cri.go:89] found id: "2b35205fbca876dcf845d877fb53cf5356a2ead6e0e926f5cbe593d89e17d643"
	I1123 10:13:39.876682  527592 cri.go:89] found id: "093ac2649d8d4c27fb9abf9413c73fc91911e373c30d8cfb1b331503417cbb03"
	I1123 10:13:39.876685  527592 cri.go:89] found id: "d1785fb925da49928f40a36ef58b27c751da4842126c62aae26166fa662da54e"
	I1123 10:13:39.876688  527592 cri.go:89] found id: "34c25c168914821590128b9aa6e866de7484d016e755b1b4599ef135b1d8e798"
	I1123 10:13:39.876691  527592 cri.go:89] found id: "f29cb2a59da8783e967adae52ce1168c66382986731fa4200f19d9893b3da9b2"
	I1123 10:13:39.876699  527592 cri.go:89] found id: "955185be0a8e3482f73c38cb4aead784358d9023b8b6180ccd3cf62d25134e1e"
	I1123 10:13:39.876702  527592 cri.go:89] found id: "cbfffd99f2e092d45a7787fa5a6e7773e4ecef4a0c16e5c9b7dd2f7c68af9e60"
	I1123 10:13:39.876705  527592 cri.go:89] found id: ""
	I1123 10:13:39.876765  527592 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:13:39.891714  527592 out.go:203] 
	W1123 10:13:39.894753  527592 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:13:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:13:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:13:39.894779  527592 out.go:285] * 
	* 
	W1123 10:13:39.901900  527592 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:13:39.904528  527592 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-566990 --alsologtostderr -v=1 failed: exit status 80
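
The pause failure above follows a consistent pattern in the stderr log: kubelet is disabled on the first pass, crictl keeps returning the kube-system container IDs, but every `sudo runc list -f json` retry fails with `open /run/runc: no such file or directory`, so the command exits with GUEST_PAUSE. A minimal sketch for reproducing the same checks by hand against the node container (the container name comes from this profile; the final `ls` is only a hypothetical follow-up to see which runtime state directory actually exists on the node):

	# Is kubelet still active after the first disable pass?
	docker exec embed-certs-566990 sudo systemctl is-active kubelet
	# Same CRI listing the pause path runs
	docker exec embed-certs-566990 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The call that fails in the log
	docker exec embed-certs-566990 sudo runc list -f json
	# Check which runtime state directories are present
	docker exec embed-certs-566990 ls -ld /run/runc /run/crio
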
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-566990
helpers_test.go:243: (dbg) docker inspect embed-certs-566990:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086",
	        "Created": "2025-11-23T10:10:53.870240419Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 524394,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:12:34.954284543Z",
	            "FinishedAt": "2025-11-23T10:12:33.884292739Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/hosts",
	        "LogPath": "/var/lib/docker/containers/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086-json.log",
	        "Name": "/embed-certs-566990",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-566990:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-566990",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086",
	                "LowerDir": "/var/lib/docker/overlay2/574f481259594912a40868acf264102260539315df15d075ad880cdeae35844b-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/574f481259594912a40868acf264102260539315df15d075ad880cdeae35844b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/574f481259594912a40868acf264102260539315df15d075ad880cdeae35844b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/574f481259594912a40868acf264102260539315df15d075ad880cdeae35844b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-566990",
	                "Source": "/var/lib/docker/volumes/embed-certs-566990/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-566990",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-566990",
	                "name.minikube.sigs.k8s.io": "embed-certs-566990",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "94fdc4b1873538abc15feca8061ddbee757bf29fd59ea67cebb460a41fa4dd28",
	            "SandboxKey": "/var/run/docker/netns/94fdc4b18735",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33494"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-566990": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:4d:f6:38:fe:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d564915410215420da3cf47698d0501dfe2d9ab80cfbf8100f70d4be821f6796",
	                    "EndpointID": "9ba184d42ae0fdc58acb2d3db23594717af7f362cc61057710e145ce5e8b79c8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-566990",
	                        "8f6ca1334711"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
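
For reference, the SSH endpoint the pause command dialed (127.0.0.1:33491 in the sshutil.go line above) is resolved from the NetworkSettings.Ports map in this inspect output. A quick, hypothetical way to pull those mappings out of the JSON, assuming jq is available on the host (it is not part of the test harness):

	docker inspect embed-certs-566990 | jq -r '.[0].NetworkSettings.Ports["22/tcp"][0].HostPort'    # 33491 (SSH)
	docker inspect embed-certs-566990 | jq -r '.[0].NetworkSettings.Ports["8443/tcp"][0].HostPort'  # 33494 (API server)
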
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-566990 -n embed-certs-566990
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-566990 -n embed-certs-566990: exit status 2 (377.061382ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-566990 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-566990 logs -n 25: (1.237456954s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p old-k8s-version-706028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p old-k8s-version-706028 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-020224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ stop    │ -p no-preload-020224 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ image   │ old-k8s-version-706028 image list --format=json                                                                                                                                                                                               │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ pause   │ -p old-k8s-version-706028 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-020224 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:11 UTC │
	│ delete  │ -p old-k8s-version-706028                                                                                                                                                                                                                     │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ delete  │ -p old-k8s-version-706028                                                                                                                                                                                                                     │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:12 UTC │
	│ image   │ no-preload-020224 image list --format=json                                                                                                                                                                                                    │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │ 23 Nov 25 10:11 UTC │
	│ pause   │ -p no-preload-020224 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │                     │
	│ delete  │ -p no-preload-020224                                                                                                                                                                                                                          │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │ 23 Nov 25 10:12 UTC │
	│ delete  │ -p no-preload-020224                                                                                                                                                                                                                          │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ delete  │ -p disable-driver-mounts-097888                                                                                                                                                                                                               │ disable-driver-mounts-097888 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-566990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │                     │
	│ stop    │ -p embed-certs-566990 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ addons  │ enable dashboard -p embed-certs-566990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-330197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-330197 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	│ image   │ embed-certs-566990 image list --format=json                                                                                                                                                                                                   │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ pause   │ -p embed-certs-566990 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:12:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:12:34.569376  524253 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:12:34.569984  524253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:12:34.570020  524253 out.go:374] Setting ErrFile to fd 2...
	I1123 10:12:34.570040  524253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:12:34.570356  524253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:12:34.570797  524253 out.go:368] Setting JSON to false
	I1123 10:12:34.571792  524253 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10504,"bootTime":1763882251,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:12:34.571892  524253 start.go:143] virtualization:  
	I1123 10:12:34.577581  524253 out.go:179] * [embed-certs-566990] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:12:34.580918  524253 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 10:12:34.581007  524253 notify.go:221] Checking for updates...
	I1123 10:12:34.585526  524253 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:12:34.588781  524253 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:12:34.591797  524253 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 10:12:34.595050  524253 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:12:34.598347  524253 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:12:34.601787  524253 config.go:182] Loaded profile config "embed-certs-566990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:12:34.602433  524253 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:12:34.643234  524253 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:12:34.643431  524253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:12:34.748631  524253 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:12:34.738537646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:12:34.748730  524253 docker.go:319] overlay module found
	I1123 10:12:34.752969  524253 out.go:179] * Using the docker driver based on existing profile
	I1123 10:12:34.755784  524253 start.go:309] selected driver: docker
	I1123 10:12:34.755803  524253 start.go:927] validating driver "docker" against &{Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:12:34.755920  524253 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:12:34.756610  524253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:12:34.842409  524253 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:12:34.83283756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:12:34.842749  524253 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:12:34.842785  524253 cni.go:84] Creating CNI manager for ""
	I1123 10:12:34.842845  524253 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:12:34.842895  524253 start.go:353] cluster config:
	{Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:12:34.846210  524253 out.go:179] * Starting "embed-certs-566990" primary control-plane node in "embed-certs-566990" cluster
	I1123 10:12:34.848995  524253 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:12:34.851506  524253 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:12:34.854397  524253 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:12:34.854457  524253 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 10:12:34.854468  524253 cache.go:65] Caching tarball of preloaded images
	I1123 10:12:34.854564  524253 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 10:12:34.854581  524253 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:12:34.854694  524253 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/config.json ...
	I1123 10:12:34.854920  524253 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:12:34.886994  524253 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:12:34.887012  524253 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:12:34.887026  524253 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:12:34.887058  524253 start.go:360] acquireMachinesLock for embed-certs-566990: {Name:mkc766faecda88b98c3d85f6aada2ef6121554c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:12:34.887139  524253 start.go:364] duration metric: took 39.409µs to acquireMachinesLock for "embed-certs-566990"
	I1123 10:12:34.887184  524253 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:12:34.887196  524253 fix.go:54] fixHost starting: 
	I1123 10:12:34.887460  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:34.914595  524253 fix.go:112] recreateIfNeeded on embed-certs-566990: state=Stopped err=<nil>
	W1123 10:12:34.914626  524253 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 10:12:34.340808  521335 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:12:34.345217  521335 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:12:34.345242  521335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:12:34.363990  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:12:34.886231  521335 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:12:34.886345  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:34.886418  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-330197 minikube.k8s.io/updated_at=2025_11_23T10_12_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=default-k8s-diff-port-330197 minikube.k8s.io/primary=true
	I1123 10:12:35.179336  521335 ops.go:34] apiserver oom_adj: -16
	I1123 10:12:35.179445  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:35.679643  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:36.179506  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:36.679562  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:37.179901  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:37.679592  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:38.179521  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:38.679597  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:38.896707  521335 kubeadm.go:1114] duration metric: took 4.010403249s to wait for elevateKubeSystemPrivileges
	I1123 10:12:38.896738  521335 kubeadm.go:403] duration metric: took 21.670318246s to StartCluster
	I1123 10:12:38.896755  521335 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:38.896813  521335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:12:38.897518  521335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:38.897743  521335 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:12:38.897850  521335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:12:38.898107  521335 config.go:182] Loaded profile config "default-k8s-diff-port-330197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:12:38.898096  521335 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:12:38.898216  521335 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-330197"
	I1123 10:12:38.898233  521335 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-330197"
	I1123 10:12:38.898261  521335 host.go:66] Checking if "default-k8s-diff-port-330197" exists ...
	I1123 10:12:38.898770  521335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:12:38.899069  521335 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-330197"
	I1123 10:12:38.899090  521335 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-330197"
	I1123 10:12:38.899386  521335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:12:38.900888  521335 out.go:179] * Verifying Kubernetes components...
	I1123 10:12:38.909541  521335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:12:38.951384  521335 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-330197"
	I1123 10:12:38.951422  521335 host.go:66] Checking if "default-k8s-diff-port-330197" exists ...
	I1123 10:12:38.951845  521335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:12:38.967832  521335 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:12:34.917675  524253 out.go:252] * Restarting existing docker container for "embed-certs-566990" ...
	I1123 10:12:34.917783  524253 cli_runner.go:164] Run: docker start embed-certs-566990
	I1123 10:12:35.293213  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:35.321485  524253 kic.go:430] container "embed-certs-566990" state is running.
	I1123 10:12:35.321878  524253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-566990
	I1123 10:12:35.342173  524253 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/config.json ...
	I1123 10:12:35.342405  524253 machine.go:94] provisionDockerMachine start ...
	I1123 10:12:35.342468  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:35.368208  524253 main.go:143] libmachine: Using SSH client type: native
	I1123 10:12:35.368636  524253 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33491 <nil> <nil>}
	I1123 10:12:35.368650  524253 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:12:35.369235  524253 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46444->127.0.0.1:33491: read: connection reset by peer
	I1123 10:12:38.541099  524253 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-566990
	
	I1123 10:12:38.541123  524253 ubuntu.go:182] provisioning hostname "embed-certs-566990"
	I1123 10:12:38.541252  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:38.562661  524253 main.go:143] libmachine: Using SSH client type: native
	I1123 10:12:38.562972  524253 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33491 <nil> <nil>}
	I1123 10:12:38.562990  524253 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-566990 && echo "embed-certs-566990" | sudo tee /etc/hostname
	I1123 10:12:38.731678  524253 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-566990
	
	I1123 10:12:38.731818  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:38.758549  524253 main.go:143] libmachine: Using SSH client type: native
	I1123 10:12:38.758869  524253 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33491 <nil> <nil>}
	I1123 10:12:38.758892  524253 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-566990' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-566990/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-566990' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:12:38.941390  524253 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:12:38.941475  524253 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 10:12:38.941514  524253 ubuntu.go:190] setting up certificates
	I1123 10:12:38.941525  524253 provision.go:84] configureAuth start
	I1123 10:12:38.941588  524253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-566990
	I1123 10:12:39.003538  524253 provision.go:143] copyHostCerts
	I1123 10:12:39.003616  524253 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 10:12:39.003633  524253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 10:12:39.003738  524253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 10:12:39.003846  524253 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 10:12:39.003857  524253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 10:12:39.003885  524253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 10:12:39.003943  524253 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 10:12:39.003953  524253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 10:12:39.003981  524253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 10:12:39.004039  524253 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.embed-certs-566990 san=[127.0.0.1 192.168.76.2 embed-certs-566990 localhost minikube]
	I1123 10:12:39.446737  524253 provision.go:177] copyRemoteCerts
	I1123 10:12:39.446803  524253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:12:39.446855  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:39.472012  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:38.971538  521335 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:12:38.971562  521335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:12:38.971625  521335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:12:39.003539  521335 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:12:39.003562  521335 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:12:39.003632  521335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:12:39.074287  521335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33486 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:12:39.105620  521335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33486 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:12:39.381935  521335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:12:39.382040  521335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:12:39.498776  521335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:12:39.502445  521335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:12:39.878737  521335 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-330197" to be "Ready" ...
	I1123 10:12:39.879071  521335 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 10:12:40.394894  521335 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-330197" context rescaled to 1 replicas
	I1123 10:12:40.400579  521335 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 10:12:39.594174  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 10:12:39.631432  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:12:39.662326  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:12:39.689967  524253 provision.go:87] duration metric: took 748.419337ms to configureAuth
	I1123 10:12:39.690006  524253 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:12:39.690232  524253 config.go:182] Loaded profile config "embed-certs-566990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:12:39.690357  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:39.717852  524253 main.go:143] libmachine: Using SSH client type: native
	I1123 10:12:39.718184  524253 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33491 <nil> <nil>}
	I1123 10:12:39.718209  524253 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:12:40.197207  524253 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:12:40.197283  524253 machine.go:97] duration metric: took 4.854853123s to provisionDockerMachine
	I1123 10:12:40.197318  524253 start.go:293] postStartSetup for "embed-certs-566990" (driver="docker")
	I1123 10:12:40.197373  524253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:12:40.197590  524253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:12:40.197686  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:40.229724  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:40.352159  524253 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:12:40.358411  524253 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:12:40.358451  524253 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:12:40.358470  524253 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 10:12:40.358548  524253 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 10:12:40.358642  524253 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 10:12:40.358766  524253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:12:40.370891  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:12:40.411075  524253 start.go:296] duration metric: took 213.709795ms for postStartSetup
	I1123 10:12:40.411229  524253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:12:40.411293  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:40.444674  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:40.558879  524253 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:12:40.565041  524253 fix.go:56] duration metric: took 5.677837494s for fixHost
	I1123 10:12:40.565083  524253 start.go:83] releasing machines lock for "embed-certs-566990", held for 5.677926414s
	I1123 10:12:40.565157  524253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-566990
	I1123 10:12:40.586094  524253 ssh_runner.go:195] Run: cat /version.json
	I1123 10:12:40.586160  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:40.586427  524253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:12:40.586490  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:40.607542  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:40.625364  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:40.726401  524253 ssh_runner.go:195] Run: systemctl --version
	I1123 10:12:40.873672  524253 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:12:40.924087  524253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:12:40.929741  524253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:12:40.929849  524253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:12:40.939063  524253 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:12:40.939091  524253 start.go:496] detecting cgroup driver to use...
	I1123 10:12:40.939153  524253 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:12:40.939270  524253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:12:40.961540  524253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:12:40.982960  524253 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:12:40.983075  524253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:12:41.000648  524253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:12:41.017773  524253 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:12:41.142938  524253 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:12:41.257362  524253 docker.go:234] disabling docker service ...
	I1123 10:12:41.257447  524253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:12:41.274100  524253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:12:41.288195  524253 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:12:41.410357  524253 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:12:41.528945  524253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:12:41.542597  524253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:12:41.557753  524253 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:12:41.557821  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.567854  524253 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:12:41.567918  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.576624  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.587089  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.597732  524253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:12:41.606995  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.616231  524253 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.624635  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.633642  524253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:12:41.641219  524253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:12:41.648916  524253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:12:41.758251  524253 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:12:41.951813  524253 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:12:41.951925  524253 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:12:41.955770  524253 start.go:564] Will wait 60s for crictl version
	I1123 10:12:41.955883  524253 ssh_runner.go:195] Run: which crictl
	I1123 10:12:41.959470  524253 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:12:41.986858  524253 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:12:41.987048  524253 ssh_runner.go:195] Run: crio --version
	I1123 10:12:42.028777  524253 ssh_runner.go:195] Run: crio --version
	I1123 10:12:42.064772  524253 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:12:40.403398  521335 addons.go:530] duration metric: took 1.505302194s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1123 10:12:41.881657  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	I1123 10:12:42.067765  524253 cli_runner.go:164] Run: docker network inspect embed-certs-566990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:12:42.087730  524253 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:12:42.092543  524253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:12:42.104554  524253 kubeadm.go:884] updating cluster {Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:12:42.104708  524253 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:12:42.104775  524253 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:12:42.148463  524253 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:12:42.148490  524253 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:12:42.148557  524253 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:12:42.183482  524253 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:12:42.183511  524253 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:12:42.183520  524253 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 10:12:42.183631  524253 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-566990 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:12:42.183727  524253 ssh_runner.go:195] Run: crio config
	I1123 10:12:42.243185  524253 cni.go:84] Creating CNI manager for ""
	I1123 10:12:42.243216  524253 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:12:42.243250  524253 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:12:42.243278  524253 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-566990 NodeName:embed-certs-566990 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:12:42.243415  524253 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-566990"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
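The block above is the full kubeadm InitConfiguration/ClusterConfiguration plus the kubelet and kube-proxy configs that minikube renders for this node; the scp and diff steps that follow write it to /var/tmp/minikube/kubeadm.yaml.new and compare it with the already-applied copy. An illustrative way to inspect both files by hand on this profile (not something the test harness runs):

  # Show the freshly rendered config, then diff it against the applied one.
  minikube -p embed-certs-566990 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
  minikube -p embed-certs-566990 ssh "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"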
	
	I1123 10:12:42.243496  524253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:12:42.253200  524253 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:12:42.253283  524253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:12:42.263475  524253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 10:12:42.278930  524253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:12:42.293834  524253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1123 10:12:42.308522  524253 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:12:42.318133  524253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:12:42.328690  524253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:12:42.450255  524253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:12:42.467317  524253 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990 for IP: 192.168.76.2
	I1123 10:12:42.467386  524253 certs.go:195] generating shared ca certs ...
	I1123 10:12:42.467417  524253 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:42.467593  524253 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:12:42.467667  524253 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:12:42.467703  524253 certs.go:257] generating profile certs ...
	I1123 10:12:42.467842  524253 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/client.key
	I1123 10:12:42.467921  524253 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key.e8338b8a
	I1123 10:12:42.468004  524253 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.key
	I1123 10:12:42.468177  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:12:42.468238  524253 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:12:42.468263  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:12:42.468320  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:12:42.468371  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:12:42.468429  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:12:42.468507  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:12:42.469182  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:12:42.489499  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:12:42.513609  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:12:42.531981  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:12:42.555102  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 10:12:42.593107  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:12:42.619088  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:12:42.638840  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:12:42.664107  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:12:42.687079  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:12:42.707155  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:12:42.727647  524253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:12:42.741153  524253 ssh_runner.go:195] Run: openssl version
	I1123 10:12:42.747737  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:12:42.756819  524253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:12:42.761068  524253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:12:42.761172  524253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:12:42.808303  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:12:42.816294  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:12:42.824328  524253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:12:42.828098  524253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:12:42.828195  524253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:12:42.874451  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:12:42.883803  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:12:42.892299  524253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:12:42.896079  524253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:12:42.896145  524253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:12:42.937488  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 10:12:42.945495  524253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:12:42.949292  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:12:42.990640  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:12:43.033735  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:12:43.077119  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:12:43.124424  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:12:43.174673  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
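Each "openssl x509 ... -checkend 86400" run above asks whether that certificate is still valid for at least another 86400 seconds (24 hours); a zero exit status means it is, so no regeneration is needed. A minimal manual sketch of the same check for one of the certs from this run (illustrative only):

  # Exit status 0 means apiserver-kubelet-client.crt will not expire within 24h.
  minikube -p embed-certs-566990 ssh \
    "sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400" \
    && echo "apiserver-kubelet-client.crt valid for at least another 24h"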
	I1123 10:12:43.232910  524253 kubeadm.go:401] StartCluster: {Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:12:43.233002  524253 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:12:43.233065  524253 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:12:43.269122  524253 cri.go:89] found id: "34c25c168914821590128b9aa6e866de7484d016e755b1b4599ef135b1d8e798"
	I1123 10:12:43.269144  524253 cri.go:89] found id: ""
	I1123 10:12:43.269199  524253 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:12:43.282719  524253 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:12:43Z" level=error msg="open /run/runc: no such file or directory"
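The runc error above only affects listing paused containers (there is no /run/runc directory on this node yet); minikube records it as a warning and continues with the existing-configuration restart path below. For illustration (not part of the log), the same kube-system container query can be re-run without --quiet so container names and states are visible:

  # List every kube-system container CRI-O knows about on this node.
  minikube -p embed-certs-566990 ssh \
    "sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system"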
	I1123 10:12:43.282790  524253 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:12:43.298825  524253 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:12:43.298846  524253 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:12:43.298897  524253 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:12:43.315206  524253 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:12:43.315813  524253 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-566990" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:12:43.318424  524253 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-282998/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-566990" cluster setting kubeconfig missing "embed-certs-566990" context setting]
	I1123 10:12:43.319084  524253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:43.322312  524253 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:12:43.353123  524253 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:12:43.353159  524253 kubeadm.go:602] duration metric: took 54.305944ms to restartPrimaryControlPlane
	I1123 10:12:43.353169  524253 kubeadm.go:403] duration metric: took 120.268964ms to StartCluster
	I1123 10:12:43.353194  524253 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:43.353259  524253 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:12:43.354601  524253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:43.354822  524253 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:12:43.355113  524253 config.go:182] Loaded profile config "embed-certs-566990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:12:43.355160  524253 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:12:43.355225  524253 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-566990"
	I1123 10:12:43.355239  524253 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-566990"
	W1123 10:12:43.355250  524253 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:12:43.355270  524253 host.go:66] Checking if "embed-certs-566990" exists ...
	I1123 10:12:43.355693  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:43.356237  524253 addons.go:70] Setting dashboard=true in profile "embed-certs-566990"
	I1123 10:12:43.356271  524253 addons.go:239] Setting addon dashboard=true in "embed-certs-566990"
	W1123 10:12:43.356279  524253 addons.go:248] addon dashboard should already be in state true
	I1123 10:12:43.356301  524253 host.go:66] Checking if "embed-certs-566990" exists ...
	I1123 10:12:43.356703  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:43.359720  524253 addons.go:70] Setting default-storageclass=true in profile "embed-certs-566990"
	I1123 10:12:43.359984  524253 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-566990"
	I1123 10:12:43.360051  524253 out.go:179] * Verifying Kubernetes components...
	I1123 10:12:43.360348  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:43.367497  524253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:12:43.407800  524253 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:12:43.410904  524253 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:12:43.414371  524253 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:12:43.414380  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:12:43.414465  524253 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:12:43.414542  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:43.418448  524253 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:12:43.418496  524253 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:12:43.418571  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:43.423653  524253 addons.go:239] Setting addon default-storageclass=true in "embed-certs-566990"
	W1123 10:12:43.423687  524253 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:12:43.423712  524253 host.go:66] Checking if "embed-certs-566990" exists ...
	I1123 10:12:43.424159  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:43.484207  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:43.492577  524253 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:12:43.492597  524253 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:12:43.492656  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:43.497098  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:43.521919  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:43.738338  524253 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:12:43.742349  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:12:43.742371  524253 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:12:43.752462  524253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:12:43.776705  524253 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:12:43.857503  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:12:43.857585  524253 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:12:43.905189  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:12:43.905260  524253 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:12:43.939612  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:12:43.939682  524253 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:12:44.007517  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:12:44.007602  524253 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:12:44.095700  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:12:44.095770  524253 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:12:44.119812  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:12:44.119886  524253 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:12:44.140047  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:12:44.140117  524253 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:12:44.163777  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:12:44.163853  524253 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:12:44.183465  524253 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
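The kubectl apply above installs the dashboard manifests staged into /etc/kubernetes/addons. An illustrative follow-up check (assuming the usual kubernetes-dashboard namespace created by dashboard-ns.yaml, and that the kubectl context carries the profile name, as the later "Done!" line confirms):

  # Confirm the dashboard Deployment, Service and pods came up.
  kubectl --context embed-certs-566990 -n kubernetes-dashboard get deploy,svc,pods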
	W1123 10:12:43.882006  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:12:45.882054  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	I1123 10:12:48.392297  524253 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.639773581s)
	I1123 10:12:48.392374  524253 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.654013377s)
	I1123 10:12:48.392342  524253 node_ready.go:35] waiting up to 6m0s for node "embed-certs-566990" to be "Ready" ...
	I1123 10:12:48.442887  524253 node_ready.go:49] node "embed-certs-566990" is "Ready"
	I1123 10:12:48.442917  524253 node_ready.go:38] duration metric: took 50.463895ms for node "embed-certs-566990" to be "Ready" ...
	I1123 10:12:48.442930  524253 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:12:48.442992  524253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:12:49.420300  524253 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.643557424s)
	I1123 10:12:49.459357  524253 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.275803306s)
	I1123 10:12:49.459655  524253 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.01664451s)
	I1123 10:12:49.459700  524253 api_server.go:72] duration metric: took 6.104846088s to wait for apiserver process to appear ...
	I1123 10:12:49.459721  524253 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:12:49.459752  524253 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:12:49.462839  524253 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-566990 addons enable metrics-server
	
	I1123 10:12:49.465769  524253 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1123 10:12:49.468747  524253 addons.go:530] duration metric: took 6.113576404s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1123 10:12:49.477782  524253 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:12:49.477824  524253 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:12:47.883357  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:12:50.381379  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:12:52.381750  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	I1123 10:12:49.960515  524253 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:12:49.968505  524253 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:12:49.969616  524253 api_server.go:141] control plane version: v1.34.1
	I1123 10:12:49.969687  524253 api_server.go:131] duration metric: took 509.945698ms to wait for apiserver health ...
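The earlier 500 responses come from /healthz?verbose while the rbac/bootstrap-roles post-start hook is still pending; once it finishes, the same probe returns 200 "ok" as shown above. A hedged equivalent of that probe through kubectl (assuming the context name matches the profile):

  # Print the apiserver's verbose healthz report, component by component.
  kubectl --context embed-certs-566990 get --raw='/healthz?verbose'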
	I1123 10:12:49.969704  524253 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:12:49.973092  524253 system_pods.go:59] 8 kube-system pods found
	I1123 10:12:49.973138  524253 system_pods.go:61] "coredns-66bc5c9577-d8sh7" [737943ee-552c-4a07-aa55-978b687c5b59] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:12:49.973188  524253 system_pods.go:61] "etcd-embed-certs-566990" [020c43bd-55e0-40c2-8119-1370611def91] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:12:49.973195  524253 system_pods.go:61] "kindnet-p6kh4" [0047dc25-c013-471f-89b6-22e1399e2dc9] Running
	I1123 10:12:49.973210  524253 system_pods.go:61] "kube-apiserver-embed-certs-566990" [cdc3c57e-a09e-45f0-85f3-865174df4118] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:12:49.973219  524253 system_pods.go:61] "kube-controller-manager-embed-certs-566990" [8057de0a-12ee-4d41-8535-b1b4db1c022e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:12:49.973245  524253 system_pods.go:61] "kube-proxy-k4lvf" [88d44863-5a0e-44f5-9806-2e6e769dc05b] Running
	I1123 10:12:49.973262  524253 system_pods.go:61] "kube-scheduler-embed-certs-566990" [63997f1e-1056-4acd-a564-a8fddff7356f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:12:49.973278  524253 system_pods.go:61] "storage-provisioner" [9f1e25da-6804-44f0-aa70-5ff52015cd12] Running
	I1123 10:12:49.973292  524253 system_pods.go:74] duration metric: took 3.57404ms to wait for pod list to return data ...
	I1123 10:12:49.973301  524253 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:12:49.976664  524253 default_sa.go:45] found service account: "default"
	I1123 10:12:49.976693  524253 default_sa.go:55] duration metric: took 3.382866ms for default service account to be created ...
	I1123 10:12:49.976704  524253 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:12:49.982365  524253 system_pods.go:86] 8 kube-system pods found
	I1123 10:12:49.982407  524253 system_pods.go:89] "coredns-66bc5c9577-d8sh7" [737943ee-552c-4a07-aa55-978b687c5b59] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:12:49.982451  524253 system_pods.go:89] "etcd-embed-certs-566990" [020c43bd-55e0-40c2-8119-1370611def91] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:12:49.982468  524253 system_pods.go:89] "kindnet-p6kh4" [0047dc25-c013-471f-89b6-22e1399e2dc9] Running
	I1123 10:12:49.982476  524253 system_pods.go:89] "kube-apiserver-embed-certs-566990" [cdc3c57e-a09e-45f0-85f3-865174df4118] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:12:49.982497  524253 system_pods.go:89] "kube-controller-manager-embed-certs-566990" [8057de0a-12ee-4d41-8535-b1b4db1c022e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:12:49.982517  524253 system_pods.go:89] "kube-proxy-k4lvf" [88d44863-5a0e-44f5-9806-2e6e769dc05b] Running
	I1123 10:12:49.982534  524253 system_pods.go:89] "kube-scheduler-embed-certs-566990" [63997f1e-1056-4acd-a564-a8fddff7356f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:12:49.982539  524253 system_pods.go:89] "storage-provisioner" [9f1e25da-6804-44f0-aa70-5ff52015cd12] Running
	I1123 10:12:49.982558  524253 system_pods.go:126] duration metric: took 5.847799ms to wait for k8s-apps to be running ...
	I1123 10:12:49.982573  524253 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:12:49.982651  524253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:12:50.008979  524253 system_svc.go:56] duration metric: took 26.393587ms WaitForService to wait for kubelet
	I1123 10:12:50.009017  524253 kubeadm.go:587] duration metric: took 6.654160393s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:12:50.009064  524253 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:12:50.021985  524253 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:12:50.022022  524253 node_conditions.go:123] node cpu capacity is 2
	I1123 10:12:50.022037  524253 node_conditions.go:105] duration metric: took 12.963649ms to run NodePressure ...
	I1123 10:12:50.022074  524253 start.go:242] waiting for startup goroutines ...
	I1123 10:12:50.022089  524253 start.go:247] waiting for cluster config update ...
	I1123 10:12:50.022101  524253 start.go:256] writing updated cluster config ...
	I1123 10:12:50.022423  524253 ssh_runner.go:195] Run: rm -f paused
	I1123 10:12:50.028023  524253 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:12:50.036471  524253 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d8sh7" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:12:52.042385  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:12:54.043234  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:12:54.382172  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:12:56.382957  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:12:56.543953  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:12:58.544260  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:12:58.882752  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:01.382144  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:01.043682  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:03.542248  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:03.881887  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:06.381622  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:05.543504  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:08.041974  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:08.881893  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:10.882009  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:10.042479  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:12.542369  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:13.382245  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:15.882066  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:15.045187  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:17.542973  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:17.883543  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:20.381661  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	I1123 10:13:21.382065  521335 node_ready.go:49] node "default-k8s-diff-port-330197" is "Ready"
	I1123 10:13:21.382097  521335 node_ready.go:38] duration metric: took 41.503320364s for node "default-k8s-diff-port-330197" to be "Ready" ...
	I1123 10:13:21.382111  521335 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:13:21.382174  521335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:13:21.396972  521335 api_server.go:72] duration metric: took 42.499187505s to wait for apiserver process to appear ...
	I1123 10:13:21.397004  521335 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:13:21.397025  521335 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 10:13:21.412797  521335 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 10:13:21.415245  521335 api_server.go:141] control plane version: v1.34.1
	I1123 10:13:21.415282  521335 api_server.go:131] duration metric: took 18.270424ms to wait for apiserver health ...
	I1123 10:13:21.415291  521335 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:13:21.427092  521335 system_pods.go:59] 8 kube-system pods found
	I1123 10:13:21.427135  521335 system_pods.go:61] "coredns-66bc5c9577-pphv6" [0a9030ea-483e-46e0-8d24-2b0dd1fe99ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:13:21.427143  521335 system_pods.go:61] "etcd-default-k8s-diff-port-330197" [04e76740-6a3c-4f4e-9b5d-2c8999bef68a] Running
	I1123 10:13:21.427149  521335 system_pods.go:61] "kindnet-wfv8n" [aa574e11-da93-494e-8803-f1af18bb542d] Running
	I1123 10:13:21.427161  521335 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-330197" [abfd2542-91c6-409a-b0bf-6b1cf4f427e9] Running
	I1123 10:13:21.427166  521335 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-330197" [c343422c-7b25-41eb-aca3-ae06812b0f50] Running
	I1123 10:13:21.427171  521335 system_pods.go:61] "kube-proxy-75qqt" [e9999f1a-4069-470f-9b88-f9bff97ff125] Running
	I1123 10:13:21.427175  521335 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-330197" [010409e0-e0ee-4de9-a9e6-23ea4a90a923] Running
	I1123 10:13:21.427181  521335 system_pods.go:61] "storage-provisioner" [41502cc7-b934-4a0a-911f-9fb784b38dc3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:13:21.427194  521335 system_pods.go:74] duration metric: took 11.896873ms to wait for pod list to return data ...
	I1123 10:13:21.427202  521335 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:13:21.430678  521335 default_sa.go:45] found service account: "default"
	I1123 10:13:21.430707  521335 default_sa.go:55] duration metric: took 3.491814ms for default service account to be created ...
	I1123 10:13:21.430727  521335 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:13:21.436198  521335 system_pods.go:86] 8 kube-system pods found
	I1123 10:13:21.436235  521335 system_pods.go:89] "coredns-66bc5c9577-pphv6" [0a9030ea-483e-46e0-8d24-2b0dd1fe99ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:13:21.436243  521335 system_pods.go:89] "etcd-default-k8s-diff-port-330197" [04e76740-6a3c-4f4e-9b5d-2c8999bef68a] Running
	I1123 10:13:21.436259  521335 system_pods.go:89] "kindnet-wfv8n" [aa574e11-da93-494e-8803-f1af18bb542d] Running
	I1123 10:13:21.436265  521335 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-330197" [abfd2542-91c6-409a-b0bf-6b1cf4f427e9] Running
	I1123 10:13:21.436270  521335 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-330197" [c343422c-7b25-41eb-aca3-ae06812b0f50] Running
	I1123 10:13:21.436277  521335 system_pods.go:89] "kube-proxy-75qqt" [e9999f1a-4069-470f-9b88-f9bff97ff125] Running
	I1123 10:13:21.436281  521335 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-330197" [010409e0-e0ee-4de9-a9e6-23ea4a90a923] Running
	I1123 10:13:21.436287  521335 system_pods.go:89] "storage-provisioner" [41502cc7-b934-4a0a-911f-9fb784b38dc3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:13:21.436316  521335 retry.go:31] will retry after 267.688988ms: missing components: kube-dns
	I1123 10:13:21.719855  521335 system_pods.go:86] 8 kube-system pods found
	I1123 10:13:21.719901  521335 system_pods.go:89] "coredns-66bc5c9577-pphv6" [0a9030ea-483e-46e0-8d24-2b0dd1fe99ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:13:21.719909  521335 system_pods.go:89] "etcd-default-k8s-diff-port-330197" [04e76740-6a3c-4f4e-9b5d-2c8999bef68a] Running
	I1123 10:13:21.719916  521335 system_pods.go:89] "kindnet-wfv8n" [aa574e11-da93-494e-8803-f1af18bb542d] Running
	I1123 10:13:21.719920  521335 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-330197" [abfd2542-91c6-409a-b0bf-6b1cf4f427e9] Running
	I1123 10:13:21.719925  521335 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-330197" [c343422c-7b25-41eb-aca3-ae06812b0f50] Running
	I1123 10:13:21.719930  521335 system_pods.go:89] "kube-proxy-75qqt" [e9999f1a-4069-470f-9b88-f9bff97ff125] Running
	I1123 10:13:21.719934  521335 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-330197" [010409e0-e0ee-4de9-a9e6-23ea4a90a923] Running
	I1123 10:13:21.719954  521335 system_pods.go:89] "storage-provisioner" [41502cc7-b934-4a0a-911f-9fb784b38dc3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:13:21.719971  521335 retry.go:31] will retry after 299.519958ms: missing components: kube-dns
	I1123 10:13:22.024526  521335 system_pods.go:86] 8 kube-system pods found
	I1123 10:13:22.024578  521335 system_pods.go:89] "coredns-66bc5c9577-pphv6" [0a9030ea-483e-46e0-8d24-2b0dd1fe99ff] Running
	I1123 10:13:22.024597  521335 system_pods.go:89] "etcd-default-k8s-diff-port-330197" [04e76740-6a3c-4f4e-9b5d-2c8999bef68a] Running
	I1123 10:13:22.024602  521335 system_pods.go:89] "kindnet-wfv8n" [aa574e11-da93-494e-8803-f1af18bb542d] Running
	I1123 10:13:22.024617  521335 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-330197" [abfd2542-91c6-409a-b0bf-6b1cf4f427e9] Running
	I1123 10:13:22.024621  521335 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-330197" [c343422c-7b25-41eb-aca3-ae06812b0f50] Running
	I1123 10:13:22.024626  521335 system_pods.go:89] "kube-proxy-75qqt" [e9999f1a-4069-470f-9b88-f9bff97ff125] Running
	I1123 10:13:22.024630  521335 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-330197" [010409e0-e0ee-4de9-a9e6-23ea4a90a923] Running
	I1123 10:13:22.024635  521335 system_pods.go:89] "storage-provisioner" [41502cc7-b934-4a0a-911f-9fb784b38dc3] Running
	I1123 10:13:22.024643  521335 system_pods.go:126] duration metric: took 593.910164ms to wait for k8s-apps to be running ...
	I1123 10:13:22.024651  521335 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:13:22.024744  521335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:13:22.044406  521335 system_svc.go:56] duration metric: took 19.746115ms WaitForService to wait for kubelet
	I1123 10:13:22.044435  521335 kubeadm.go:587] duration metric: took 43.146660616s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:13:22.044455  521335 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:13:22.047630  521335 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:13:22.047663  521335 node_conditions.go:123] node cpu capacity is 2
	I1123 10:13:22.047677  521335 node_conditions.go:105] duration metric: took 3.217127ms to run NodePressure ...
	I1123 10:13:22.047694  521335 start.go:242] waiting for startup goroutines ...
	I1123 10:13:22.047702  521335 start.go:247] waiting for cluster config update ...
	I1123 10:13:22.047713  521335 start.go:256] writing updated cluster config ...
	I1123 10:13:22.048016  521335 ssh_runner.go:195] Run: rm -f paused
	I1123 10:13:22.052242  521335 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:13:22.056674  521335 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pphv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.062088  521335 pod_ready.go:94] pod "coredns-66bc5c9577-pphv6" is "Ready"
	I1123 10:13:22.062114  521335 pod_ready.go:86] duration metric: took 5.372157ms for pod "coredns-66bc5c9577-pphv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.064742  521335 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.069906  521335 pod_ready.go:94] pod "etcd-default-k8s-diff-port-330197" is "Ready"
	I1123 10:13:22.069933  521335 pod_ready.go:86] duration metric: took 5.16526ms for pod "etcd-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.072948  521335 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.078059  521335 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-330197" is "Ready"
	I1123 10:13:22.078084  521335 pod_ready.go:86] duration metric: took 5.111442ms for pod "kube-apiserver-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.080881  521335 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.457153  521335 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-330197" is "Ready"
	I1123 10:13:22.457182  521335 pod_ready.go:86] duration metric: took 376.276326ms for pod "kube-controller-manager-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.657820  521335 pod_ready.go:83] waiting for pod "kube-proxy-75qqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:23.057150  521335 pod_ready.go:94] pod "kube-proxy-75qqt" is "Ready"
	I1123 10:13:23.057200  521335 pod_ready.go:86] duration metric: took 399.352644ms for pod "kube-proxy-75qqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:23.257221  521335 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:23.657699  521335 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-330197" is "Ready"
	I1123 10:13:23.657728  521335 pod_ready.go:86] duration metric: took 400.478199ms for pod "kube-scheduler-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:23.657742  521335 pod_ready.go:40] duration metric: took 1.605465474s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:13:23.714769  521335 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:13:23.718401  521335 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-330197" cluster and "default" namespace by default
	W1123 10:13:20.044347  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:22.542279  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	I1123 10:13:24.041853  524253 pod_ready.go:94] pod "coredns-66bc5c9577-d8sh7" is "Ready"
	I1123 10:13:24.041901  524253 pod_ready.go:86] duration metric: took 34.005401362s for pod "coredns-66bc5c9577-d8sh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.044925  524253 pod_ready.go:83] waiting for pod "etcd-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.049926  524253 pod_ready.go:94] pod "etcd-embed-certs-566990" is "Ready"
	I1123 10:13:24.049956  524253 pod_ready.go:86] duration metric: took 5.009345ms for pod "etcd-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.052293  524253 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.057957  524253 pod_ready.go:94] pod "kube-apiserver-embed-certs-566990" is "Ready"
	I1123 10:13:24.057987  524253 pod_ready.go:86] duration metric: took 5.668112ms for pod "kube-apiserver-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.060529  524253 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.240980  524253 pod_ready.go:94] pod "kube-controller-manager-embed-certs-566990" is "Ready"
	I1123 10:13:24.241007  524253 pod_ready.go:86] duration metric: took 180.45021ms for pod "kube-controller-manager-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.440356  524253 pod_ready.go:83] waiting for pod "kube-proxy-k4lvf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.840001  524253 pod_ready.go:94] pod "kube-proxy-k4lvf" is "Ready"
	I1123 10:13:24.840031  524253 pod_ready.go:86] duration metric: took 399.646726ms for pod "kube-proxy-k4lvf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:25.040159  524253 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:25.439745  524253 pod_ready.go:94] pod "kube-scheduler-embed-certs-566990" is "Ready"
	I1123 10:13:25.439815  524253 pod_ready.go:86] duration metric: took 399.627786ms for pod "kube-scheduler-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:25.439837  524253 pod_ready.go:40] duration metric: took 35.411776765s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:13:25.495301  524253 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:13:25.498797  524253 out.go:179] * Done! kubectl is now configured to use "embed-certs-566990" cluster and "default" namespace by default
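The extra wait above took roughly 34s for coredns-66bc5c9577-d8sh7 to report Ready before the start completed. An illustrative way to reproduce that wait after the fact with plain kubectl (not something the harness runs), using the same k8s-app=kube-dns label:

  # Block until the CoreDNS pod in kube-system reports the Ready condition.
  kubectl --context embed-certs-566990 -n kube-system wait \
    --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m

The CRI-O excerpt below is the sort of per-component section that "minikube -p embed-certs-566990 logs" collects from the node.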
	
	
	==> CRI-O <==
	Nov 23 10:13:26 embed-certs-566990 crio[655]: time="2025-11-23T10:13:26.663624449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:13:26 embed-certs-566990 crio[655]: time="2025-11-23T10:13:26.683638933Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:13:26 embed-certs-566990 crio[655]: time="2025-11-23T10:13:26.684202306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:13:26 embed-certs-566990 crio[655]: time="2025-11-23T10:13:26.699922362Z" level=info msg="Created container 955185be0a8e3482f73c38cb4aead784358d9023b8b6180ccd3cf62d25134e1e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw/dashboard-metrics-scraper" id=fddbd392-8f19-4dc1-abc3-aba7e3083da9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:13:26 embed-certs-566990 crio[655]: time="2025-11-23T10:13:26.702369163Z" level=info msg="Starting container: 955185be0a8e3482f73c38cb4aead784358d9023b8b6180ccd3cf62d25134e1e" id=987315f1-22d1-4479-8a9a-4bb82b39ef35 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:13:26 embed-certs-566990 crio[655]: time="2025-11-23T10:13:26.705462989Z" level=info msg="Started container" PID=1670 containerID=955185be0a8e3482f73c38cb4aead784358d9023b8b6180ccd3cf62d25134e1e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw/dashboard-metrics-scraper id=987315f1-22d1-4479-8a9a-4bb82b39ef35 name=/runtime.v1.RuntimeService/StartContainer sandboxID=31152e8e217d068321c511d88ace4be27f5c9844555c44f683a410240eefb3c5
	Nov 23 10:13:26 embed-certs-566990 conmon[1668]: conmon 955185be0a8e3482f73c <ninfo>: container 1670 exited with status 1
	Nov 23 10:13:26 embed-certs-566990 crio[655]: time="2025-11-23T10:13:26.990815333Z" level=info msg="Removing container: 6cbb2955381eb442dc997176a519320f443a3d0f400499f6c07adc40e030e59d" id=7fabb1f8-fa63-48b8-ba6d-063eceb0db11 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:13:27 embed-certs-566990 crio[655]: time="2025-11-23T10:13:27.003142998Z" level=info msg="Error loading conmon cgroup of container 6cbb2955381eb442dc997176a519320f443a3d0f400499f6c07adc40e030e59d: cgroup deleted" id=7fabb1f8-fa63-48b8-ba6d-063eceb0db11 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:13:27 embed-certs-566990 crio[655]: time="2025-11-23T10:13:27.017182281Z" level=info msg="Removed container 6cbb2955381eb442dc997176a519320f443a3d0f400499f6c07adc40e030e59d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw/dashboard-metrics-scraper" id=7fabb1f8-fa63-48b8-ba6d-063eceb0db11 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.176923549Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.180651732Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.180687737Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.180710055Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.184403522Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.18443471Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.184459703Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.187649777Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.187684444Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.187707928Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.190910785Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.190944968Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.190968435Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.194089495Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.194127379Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	955185be0a8e3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   31152e8e217d0       dashboard-metrics-scraper-6ffb444bf9-gj2vw   kubernetes-dashboard
	1f2c0a1a12843       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   644fa530760f5       storage-provisioner                          kube-system
	cbfffd99f2e09       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago      Running             kubernetes-dashboard        0                   be26a42acedc3       kubernetes-dashboard-855c9754f9-hmrpb        kubernetes-dashboard
	19dc4b8d2d9db       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   46e4b4ba1ad98       coredns-66bc5c9577-d8sh7                     kube-system
	f6b85f94b8d9f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   367609e92d6d8       kindnet-p6kh4                                kube-system
	1ebd1454aec7a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   fa5d6ef527fc7       busybox                                      default
	b5bff28be9cd6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   644fa530760f5       storage-provisioner                          kube-system
	2b35205fbca87       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago      Running             kube-proxy                  1                   69af23f1ac06d       kube-proxy-k4lvf                             kube-system
	093ac2649d8d4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           57 seconds ago      Running             kube-controller-manager     1                   10f70cb1d509c       kube-controller-manager-embed-certs-566990   kube-system
	d1785fb925da4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           57 seconds ago      Running             kube-scheduler              1                   9426b21756a0a       kube-scheduler-embed-certs-566990            kube-system
	34c25c1689148       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           57 seconds ago      Running             etcd                        1                   4e244f53a0441       etcd-embed-certs-566990                      kube-system
	f29cb2a59da87       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           57 seconds ago      Running             kube-apiserver              1                   ecb076407c2bd       kube-apiserver-embed-certs-566990            kube-system
	
	
	==> coredns [19dc4b8d2d9db97e17ff50ea3872f7c8f26c53f8c48c68cbd62ab46f6229554a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60564 - 44673 "HINFO IN 5131750717403680825.5759646936207733108. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026574495s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-566990
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-566990
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=embed-certs-566990
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_11_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:11:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-566990
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:13:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:13:29 +0000   Sun, 23 Nov 2025 10:11:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:13:29 +0000   Sun, 23 Nov 2025 10:11:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:13:29 +0000   Sun, 23 Nov 2025 10:11:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:13:29 +0000   Sun, 23 Nov 2025 10:12:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-566990
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                7626cdea-55dc-447c-9203-313e96141bd6
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-d8sh7                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m16s
	  kube-system                 etcd-embed-certs-566990                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m21s
	  kube-system                 kindnet-p6kh4                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-embed-certs-566990             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-embed-certs-566990    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-k4lvf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-embed-certs-566990             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gj2vw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hmrpb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m14s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Warning  CgroupV1                 2m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m28s (x8 over 2m29s)  kubelet          Node embed-certs-566990 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m28s (x8 over 2m29s)  kubelet          Node embed-certs-566990 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m28s (x8 over 2m29s)  kubelet          Node embed-certs-566990 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m21s                  kubelet          Node embed-certs-566990 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m21s                  kubelet          Node embed-certs-566990 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m21s                  kubelet          Node embed-certs-566990 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m21s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m17s                  node-controller  Node embed-certs-566990 event: Registered Node embed-certs-566990 in Controller
	  Normal   NodeReady                94s                    kubelet          Node embed-certs-566990 status is now: NodeReady
	  Normal   Starting                 59s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)      kubelet          Node embed-certs-566990 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node embed-certs-566990 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)      kubelet          Node embed-certs-566990 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node embed-certs-566990 event: Registered Node embed-certs-566990 in Controller
	
	
	==> dmesg <==
	[ +14.190024] overlayfs: idmapped layers are currently not supported
	[Nov23 09:49] overlayfs: idmapped layers are currently not supported
	[Nov23 09:50] overlayfs: idmapped layers are currently not supported
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	[Nov23 10:10] overlayfs: idmapped layers are currently not supported
	[Nov23 10:11] overlayfs: idmapped layers are currently not supported
	[Nov23 10:12] overlayfs: idmapped layers are currently not supported
	[ +16.378417] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [34c25c168914821590128b9aa6e866de7484d016e755b1b4599ef135b1d8e798] <==
	{"level":"warn","ts":"2025-11-23T10:12:46.089114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.105130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.130257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.146448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.168739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.185958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.196409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.221146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.240085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.262871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.287179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.296273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.317246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.337492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.361717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.366716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.385383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.424532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.464229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.467665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.468954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.514740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.546161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.558072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.697069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50298","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:13:41 up  2:56,  0 user,  load average: 4.69, 4.57, 3.62
	Linux embed-certs-566990 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f6b85f94b8d9fb08196e9f8bebc066233445b88b74d7b58a3b7d49897d952cb5] <==
	I1123 10:12:48.984896       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:12:48.985170       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:12:48.985301       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:12:48.985319       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:12:48.985329       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:12:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:12:49.176090       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:12:49.176108       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:12:49.176117       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:12:49.176393       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 10:13:19.175885       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 10:13:19.176810       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 10:13:19.176889       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 10:13:19.176922       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 10:13:20.577230       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:13:20.577261       1 metrics.go:72] Registering metrics
	I1123 10:13:20.577316       1 controller.go:711] "Syncing nftables rules"
	I1123 10:13:29.176600       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:13:29.176656       1 main.go:301] handling current node
	I1123 10:13:39.177547       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:13:39.177591       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f29cb2a59da8783e967adae52ce1168c66382986731fa4200f19d9893b3da9b2] <==
	I1123 10:12:47.891765       1 aggregator.go:171] initial CRD sync complete...
	I1123 10:12:47.891797       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 10:12:47.891806       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:12:47.969085       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 10:12:47.993727       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:12:47.994244       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:12:48.116324       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 10:12:48.118787       1 policy_source.go:240] refreshing policies
	I1123 10:12:48.128153       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 10:12:48.129193       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 10:12:48.129566       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:12:48.151971       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 10:12:48.152011       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 10:12:48.175694       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:12:48.300889       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:12:48.501589       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:12:49.064921       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:12:49.181857       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:12:49.251462       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:12:49.272991       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:12:49.434058       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.230.227"}
	I1123 10:12:49.452925       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.91.74"}
	I1123 10:12:51.923129       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:12:52.161016       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:12:52.361463       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [093ac2649d8d4c27fb9abf9413c73fc91911e373c30d8cfb1b331503417cbb03] <==
	I1123 10:12:51.914448       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:12:51.917109       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:12:51.920197       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 10:12:51.921253       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 10:12:51.921347       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 10:12:51.921399       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 10:12:51.921493       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 10:12:51.921522       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 10:12:51.922910       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 10:12:51.927599       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 10:12:51.930658       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:12:51.933033       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:12:51.936765       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 10:12:51.942124       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 10:12:51.947377       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:12:51.948951       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:12:51.955236       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 10:12:51.955250       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 10:12:51.955268       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 10:12:51.957050       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:12:51.957127       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:12:51.957172       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:12:51.957133       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:12:51.957176       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:12:51.961096       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	
	
	==> kube-proxy [2b35205fbca876dcf845d877fb53cf5356a2ead6e0e926f5cbe593d89e17d643] <==
	I1123 10:12:49.092883       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:12:49.290293       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:12:49.430247       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:12:49.430375       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 10:12:49.430498       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:12:49.490427       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:12:49.490541       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:12:49.496757       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:12:49.497363       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:12:49.497660       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:12:49.499144       1 config.go:200] "Starting service config controller"
	I1123 10:12:49.499203       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:12:49.499246       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:12:49.499283       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:12:49.499343       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:12:49.499379       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:12:49.500066       1 config.go:309] "Starting node config controller"
	I1123 10:12:49.500127       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:12:49.500159       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:12:49.599833       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:12:49.599868       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:12:49.599930       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d1785fb925da49928f40a36ef58b27c751da4842126c62aae26166fa662da54e] <==
	I1123 10:12:45.668328       1 serving.go:386] Generated self-signed cert in-memory
	I1123 10:12:48.870278       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:12:48.870309       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:12:48.893844       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:12:48.893951       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 10:12:48.893970       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 10:12:48.893992       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:12:48.899074       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:12:48.915732       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:12:48.900462       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:12:48.916072       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:12:48.999706       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 10:12:49.016170       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:12:49.016245       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:12:52 embed-certs-566990 kubelet[784]: I1123 10:12:52.620367     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a7bf3071-fcde-4095-a28f-fb26acf0096e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hmrpb\" (UID: \"a7bf3071-fcde-4095-a28f-fb26acf0096e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hmrpb"
	Nov 23 10:12:52 embed-certs-566990 kubelet[784]: I1123 10:12:52.620395     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fcd3568a-cefb-4a84-a9c9-b420dc9e29c2-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-gj2vw\" (UID: \"fcd3568a-cefb-4a84-a9c9-b420dc9e29c2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw"
	Nov 23 10:12:52 embed-certs-566990 kubelet[784]: I1123 10:12:52.620413     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhvmh\" (UniqueName: \"kubernetes.io/projected/fcd3568a-cefb-4a84-a9c9-b420dc9e29c2-kube-api-access-bhvmh\") pod \"dashboard-metrics-scraper-6ffb444bf9-gj2vw\" (UID: \"fcd3568a-cefb-4a84-a9c9-b420dc9e29c2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw"
	Nov 23 10:12:52 embed-certs-566990 kubelet[784]: W1123 10:12:52.865653     784 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/crio-31152e8e217d068321c511d88ace4be27f5c9844555c44f683a410240eefb3c5 WatchSource:0}: Error finding container 31152e8e217d068321c511d88ace4be27f5c9844555c44f683a410240eefb3c5: Status 404 returned error can't find the container with id 31152e8e217d068321c511d88ace4be27f5c9844555c44f683a410240eefb3c5
	Nov 23 10:12:52 embed-certs-566990 kubelet[784]: W1123 10:12:52.866140     784 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/crio-be26a42acedc37d242530b8ab4cfc6c2566de9f69ca5a466efb058c996c4db8c WatchSource:0}: Error finding container be26a42acedc37d242530b8ab4cfc6c2566de9f69ca5a466efb058c996c4db8c: Status 404 returned error can't find the container with id be26a42acedc37d242530b8ab4cfc6c2566de9f69ca5a466efb058c996c4db8c
	Nov 23 10:12:53 embed-certs-566990 kubelet[784]: I1123 10:12:53.831166     784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 10:12:57 embed-certs-566990 kubelet[784]: I1123 10:12:57.927286     784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hmrpb" podStartSLOduration=1.2531326329999999 podStartE2EDuration="5.927266829s" podCreationTimestamp="2025-11-23 10:12:52 +0000 UTC" firstStartedPulling="2025-11-23 10:12:52.870288606 +0000 UTC m=+10.399089862" lastFinishedPulling="2025-11-23 10:12:57.544422794 +0000 UTC m=+15.073224058" observedRunningTime="2025-11-23 10:12:57.927087454 +0000 UTC m=+15.455888751" watchObservedRunningTime="2025-11-23 10:12:57.927266829 +0000 UTC m=+15.456068101"
	Nov 23 10:13:02 embed-certs-566990 kubelet[784]: I1123 10:13:02.922127     784 scope.go:117] "RemoveContainer" containerID="1f61cf3a6c5ad65565b730eef186c9c82c39908b54c767c9135a525094ba5ada"
	Nov 23 10:13:03 embed-certs-566990 kubelet[784]: I1123 10:13:03.926588     784 scope.go:117] "RemoveContainer" containerID="1f61cf3a6c5ad65565b730eef186c9c82c39908b54c767c9135a525094ba5ada"
	Nov 23 10:13:03 embed-certs-566990 kubelet[784]: I1123 10:13:03.926885     784 scope.go:117] "RemoveContainer" containerID="6cbb2955381eb442dc997176a519320f443a3d0f400499f6c07adc40e030e59d"
	Nov 23 10:13:03 embed-certs-566990 kubelet[784]: E1123 10:13:03.927035     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gj2vw_kubernetes-dashboard(fcd3568a-cefb-4a84-a9c9-b420dc9e29c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw" podUID="fcd3568a-cefb-4a84-a9c9-b420dc9e29c2"
	Nov 23 10:13:04 embed-certs-566990 kubelet[784]: I1123 10:13:04.931614     784 scope.go:117] "RemoveContainer" containerID="6cbb2955381eb442dc997176a519320f443a3d0f400499f6c07adc40e030e59d"
	Nov 23 10:13:04 embed-certs-566990 kubelet[784]: E1123 10:13:04.931771     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gj2vw_kubernetes-dashboard(fcd3568a-cefb-4a84-a9c9-b420dc9e29c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw" podUID="fcd3568a-cefb-4a84-a9c9-b420dc9e29c2"
	Nov 23 10:13:12 embed-certs-566990 kubelet[784]: I1123 10:13:12.816570     784 scope.go:117] "RemoveContainer" containerID="6cbb2955381eb442dc997176a519320f443a3d0f400499f6c07adc40e030e59d"
	Nov 23 10:13:12 embed-certs-566990 kubelet[784]: E1123 10:13:12.816763     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gj2vw_kubernetes-dashboard(fcd3568a-cefb-4a84-a9c9-b420dc9e29c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw" podUID="fcd3568a-cefb-4a84-a9c9-b420dc9e29c2"
	Nov 23 10:13:19 embed-certs-566990 kubelet[784]: I1123 10:13:19.968342     784 scope.go:117] "RemoveContainer" containerID="b5bff28be9cd6a59d8450e8ef4e11b37cfe957b8f2342050eeae3e5a4c182b02"
	Nov 23 10:13:26 embed-certs-566990 kubelet[784]: I1123 10:13:26.660185     784 scope.go:117] "RemoveContainer" containerID="6cbb2955381eb442dc997176a519320f443a3d0f400499f6c07adc40e030e59d"
	Nov 23 10:13:26 embed-certs-566990 kubelet[784]: I1123 10:13:26.988677     784 scope.go:117] "RemoveContainer" containerID="6cbb2955381eb442dc997176a519320f443a3d0f400499f6c07adc40e030e59d"
	Nov 23 10:13:26 embed-certs-566990 kubelet[784]: I1123 10:13:26.989029     784 scope.go:117] "RemoveContainer" containerID="955185be0a8e3482f73c38cb4aead784358d9023b8b6180ccd3cf62d25134e1e"
	Nov 23 10:13:26 embed-certs-566990 kubelet[784]: E1123 10:13:26.989192     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gj2vw_kubernetes-dashboard(fcd3568a-cefb-4a84-a9c9-b420dc9e29c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw" podUID="fcd3568a-cefb-4a84-a9c9-b420dc9e29c2"
	Nov 23 10:13:32 embed-certs-566990 kubelet[784]: I1123 10:13:32.817320     784 scope.go:117] "RemoveContainer" containerID="955185be0a8e3482f73c38cb4aead784358d9023b8b6180ccd3cf62d25134e1e"
	Nov 23 10:13:32 embed-certs-566990 kubelet[784]: E1123 10:13:32.819185     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gj2vw_kubernetes-dashboard(fcd3568a-cefb-4a84-a9c9-b420dc9e29c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw" podUID="fcd3568a-cefb-4a84-a9c9-b420dc9e29c2"
	Nov 23 10:13:37 embed-certs-566990 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:13:37 embed-certs-566990 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:13:37 embed-certs-566990 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [cbfffd99f2e092d45a7787fa5a6e7773e4ecef4a0c16e5c9b7dd2f7c68af9e60] <==
	2025/11/23 10:12:57 Starting overwatch
	2025/11/23 10:12:57 Using namespace: kubernetes-dashboard
	2025/11/23 10:12:57 Using in-cluster config to connect to apiserver
	2025/11/23 10:12:57 Using secret token for csrf signing
	2025/11/23 10:12:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 10:12:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 10:12:57 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 10:12:57 Generating JWE encryption key
	2025/11/23 10:12:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 10:12:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 10:12:59 Initializing JWE encryption key from synchronized object
	2025/11/23 10:12:59 Creating in-cluster Sidecar client
	2025/11/23 10:12:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:12:59 Serving insecurely on HTTP port: 9090
	2025/11/23 10:13:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1f2c0a1a12843b954c961d5ac9cc2b63a6e365a430f494828ff5d31fa2951e5a] <==
	I1123 10:13:20.019109       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:13:20.046424       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:13:20.046545       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:13:20.048929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:23.503645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:27.763996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:31.362195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:34.417770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:37.440320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:37.445940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:13:37.446088       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:13:37.446258       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-566990_3a23e836-a1ae-452e-9523-e70cb3eed2ec!
	I1123 10:13:37.446302       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5390d33c-adeb-4208-bc55-623048fa6ee4", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-566990_3a23e836-a1ae-452e-9523-e70cb3eed2ec became leader
	W1123 10:13:37.454470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:37.458472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:13:37.547201       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-566990_3a23e836-a1ae-452e-9523-e70cb3eed2ec!
	W1123 10:13:39.461493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:39.468426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:41.471765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:41.476996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b5bff28be9cd6a59d8450e8ef4e11b37cfe957b8f2342050eeae3e5a4c182b02] <==
	I1123 10:12:49.009081       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 10:13:19.011609       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-566990 -n embed-certs-566990
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-566990 -n embed-certs-566990: exit status 2 (370.564119ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-566990 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-566990
helpers_test.go:243: (dbg) docker inspect embed-certs-566990:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086",
	        "Created": "2025-11-23T10:10:53.870240419Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 524394,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:12:34.954284543Z",
	            "FinishedAt": "2025-11-23T10:12:33.884292739Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/hostname",
	        "HostsPath": "/var/lib/docker/containers/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/hosts",
	        "LogPath": "/var/lib/docker/containers/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086-json.log",
	        "Name": "/embed-certs-566990",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-566990:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-566990",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086",
	                "LowerDir": "/var/lib/docker/overlay2/574f481259594912a40868acf264102260539315df15d075ad880cdeae35844b-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/574f481259594912a40868acf264102260539315df15d075ad880cdeae35844b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/574f481259594912a40868acf264102260539315df15d075ad880cdeae35844b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/574f481259594912a40868acf264102260539315df15d075ad880cdeae35844b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-566990",
	                "Source": "/var/lib/docker/volumes/embed-certs-566990/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-566990",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-566990",
	                "name.minikube.sigs.k8s.io": "embed-certs-566990",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "94fdc4b1873538abc15feca8061ddbee757bf29fd59ea67cebb460a41fa4dd28",
	            "SandboxKey": "/var/run/docker/netns/94fdc4b18735",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33494"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-566990": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:4d:f6:38:fe:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d564915410215420da3cf47698d0501dfe2d9ab80cfbf8100f70d4be821f6796",
	                    "EndpointID": "9ba184d42ae0fdc58acb2d3db23594717af7f362cc61057710e145ce5e8b79c8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-566990",
	                        "8f6ca1334711"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
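For a quick manual reproduction, the full `docker inspect` dump above can be reduced to the few fields the post-mortem actually keys off, using the same Go-template filters the minikube provisioner runs later in this log (container/profile name taken from this run; the snippet is only an illustrative sketch):

docker container inspect -f '{{.State.Status}}' embed-certs-566990
docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-566990
docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' embed-certs-566990

The first prints the container state ("running" in the dump above), the second the host port published for SSH (33491 here), and the third the address on the per-profile Docker network (192.168.76.2).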
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-566990 -n embed-certs-566990
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-566990 -n embed-certs-566990: exit status 2 (356.490703ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
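The non-zero exit here is tolerated on purpose: after `minikube pause` the kicbase container keeps running, so `--format={{.Host}}` still prints "Running" while the status command exits non-zero because components other than the host are not reported as running; the helper records this as "may be ok". To re-run the same probe by hand (binary path and profile name as used in this job, shown only as a sketch):

out/minikube-linux-arm64 status --format='{{.Host}}' -p embed-certs-566990; echo "exit=$?"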
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-566990 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-566990 logs -n 25: (1.249106679s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p old-k8s-version-706028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -p old-k8s-version-706028 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-020224 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ stop    │ -p no-preload-020224 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ image   │ old-k8s-version-706028 image list --format=json                                                                                                                                                                                               │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ pause   │ -p old-k8s-version-706028 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-020224 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:11 UTC │
	│ delete  │ -p old-k8s-version-706028                                                                                                                                                                                                                     │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ delete  │ -p old-k8s-version-706028                                                                                                                                                                                                                     │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:12 UTC │
	│ image   │ no-preload-020224 image list --format=json                                                                                                                                                                                                    │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │ 23 Nov 25 10:11 UTC │
	│ pause   │ -p no-preload-020224 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │                     │
	│ delete  │ -p no-preload-020224                                                                                                                                                                                                                          │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │ 23 Nov 25 10:12 UTC │
	│ delete  │ -p no-preload-020224                                                                                                                                                                                                                          │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ delete  │ -p disable-driver-mounts-097888                                                                                                                                                                                                               │ disable-driver-mounts-097888 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-566990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │                     │
	│ stop    │ -p embed-certs-566990 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ addons  │ enable dashboard -p embed-certs-566990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-330197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-330197 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	│ image   │ embed-certs-566990 image list --format=json                                                                                                                                                                                                   │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ pause   │ -p embed-certs-566990 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:12:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:12:34.569376  524253 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:12:34.569984  524253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:12:34.570020  524253 out.go:374] Setting ErrFile to fd 2...
	I1123 10:12:34.570040  524253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:12:34.570356  524253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:12:34.570797  524253 out.go:368] Setting JSON to false
	I1123 10:12:34.571792  524253 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10504,"bootTime":1763882251,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:12:34.571892  524253 start.go:143] virtualization:  
	I1123 10:12:34.577581  524253 out.go:179] * [embed-certs-566990] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:12:34.580918  524253 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 10:12:34.581007  524253 notify.go:221] Checking for updates...
	I1123 10:12:34.585526  524253 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:12:34.588781  524253 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:12:34.591797  524253 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 10:12:34.595050  524253 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:12:34.598347  524253 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:12:34.601787  524253 config.go:182] Loaded profile config "embed-certs-566990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:12:34.602433  524253 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:12:34.643234  524253 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:12:34.643431  524253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:12:34.748631  524253 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:12:34.738537646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:12:34.748730  524253 docker.go:319] overlay module found
	I1123 10:12:34.752969  524253 out.go:179] * Using the docker driver based on existing profile
	I1123 10:12:34.755784  524253 start.go:309] selected driver: docker
	I1123 10:12:34.755803  524253 start.go:927] validating driver "docker" against &{Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:12:34.755920  524253 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:12:34.756610  524253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:12:34.842409  524253 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:12:34.83283756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:12:34.842749  524253 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:12:34.842785  524253 cni.go:84] Creating CNI manager for ""
	I1123 10:12:34.842845  524253 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:12:34.842895  524253 start.go:353] cluster config:
	{Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:12:34.846210  524253 out.go:179] * Starting "embed-certs-566990" primary control-plane node in "embed-certs-566990" cluster
	I1123 10:12:34.848995  524253 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:12:34.851506  524253 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:12:34.854397  524253 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:12:34.854457  524253 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 10:12:34.854468  524253 cache.go:65] Caching tarball of preloaded images
	I1123 10:12:34.854564  524253 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 10:12:34.854581  524253 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:12:34.854694  524253 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/config.json ...
	I1123 10:12:34.854920  524253 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:12:34.886994  524253 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:12:34.887012  524253 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:12:34.887026  524253 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:12:34.887058  524253 start.go:360] acquireMachinesLock for embed-certs-566990: {Name:mkc766faecda88b98c3d85f6aada2ef6121554c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:12:34.887139  524253 start.go:364] duration metric: took 39.409µs to acquireMachinesLock for "embed-certs-566990"
	I1123 10:12:34.887184  524253 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:12:34.887196  524253 fix.go:54] fixHost starting: 
	I1123 10:12:34.887460  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:34.914595  524253 fix.go:112] recreateIfNeeded on embed-certs-566990: state=Stopped err=<nil>
	W1123 10:12:34.914626  524253 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 10:12:34.340808  521335 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:12:34.345217  521335 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:12:34.345242  521335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:12:34.363990  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:12:34.886231  521335 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:12:34.886345  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:34.886418  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-330197 minikube.k8s.io/updated_at=2025_11_23T10_12_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=default-k8s-diff-port-330197 minikube.k8s.io/primary=true
	I1123 10:12:35.179336  521335 ops.go:34] apiserver oom_adj: -16
	I1123 10:12:35.179445  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:35.679643  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:36.179506  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:36.679562  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:37.179901  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:37.679592  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:38.179521  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:38.679597  521335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:12:38.896707  521335 kubeadm.go:1114] duration metric: took 4.010403249s to wait for elevateKubeSystemPrivileges
	I1123 10:12:38.896738  521335 kubeadm.go:403] duration metric: took 21.670318246s to StartCluster
	I1123 10:12:38.896755  521335 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:38.896813  521335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:12:38.897518  521335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:38.897743  521335 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:12:38.897850  521335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:12:38.898107  521335 config.go:182] Loaded profile config "default-k8s-diff-port-330197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:12:38.898096  521335 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:12:38.898216  521335 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-330197"
	I1123 10:12:38.898233  521335 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-330197"
	I1123 10:12:38.898261  521335 host.go:66] Checking if "default-k8s-diff-port-330197" exists ...
	I1123 10:12:38.898770  521335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:12:38.899069  521335 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-330197"
	I1123 10:12:38.899090  521335 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-330197"
	I1123 10:12:38.899386  521335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:12:38.900888  521335 out.go:179] * Verifying Kubernetes components...
	I1123 10:12:38.909541  521335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:12:38.951384  521335 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-330197"
	I1123 10:12:38.951422  521335 host.go:66] Checking if "default-k8s-diff-port-330197" exists ...
	I1123 10:12:38.951845  521335 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:12:38.967832  521335 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:12:34.917675  524253 out.go:252] * Restarting existing docker container for "embed-certs-566990" ...
	I1123 10:12:34.917783  524253 cli_runner.go:164] Run: docker start embed-certs-566990
	I1123 10:12:35.293213  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:35.321485  524253 kic.go:430] container "embed-certs-566990" state is running.
	I1123 10:12:35.321878  524253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-566990
	I1123 10:12:35.342173  524253 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/config.json ...
	I1123 10:12:35.342405  524253 machine.go:94] provisionDockerMachine start ...
	I1123 10:12:35.342468  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:35.368208  524253 main.go:143] libmachine: Using SSH client type: native
	I1123 10:12:35.368636  524253 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33491 <nil> <nil>}
	I1123 10:12:35.368650  524253 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:12:35.369235  524253 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46444->127.0.0.1:33491: read: connection reset by peer
	I1123 10:12:38.541099  524253 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-566990
	
	I1123 10:12:38.541123  524253 ubuntu.go:182] provisioning hostname "embed-certs-566990"
	I1123 10:12:38.541252  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:38.562661  524253 main.go:143] libmachine: Using SSH client type: native
	I1123 10:12:38.562972  524253 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33491 <nil> <nil>}
	I1123 10:12:38.562990  524253 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-566990 && echo "embed-certs-566990" | sudo tee /etc/hostname
	I1123 10:12:38.731678  524253 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-566990
	
	I1123 10:12:38.731818  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:38.758549  524253 main.go:143] libmachine: Using SSH client type: native
	I1123 10:12:38.758869  524253 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33491 <nil> <nil>}
	I1123 10:12:38.758892  524253 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-566990' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-566990/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-566990' | sudo tee -a /etc/hosts; 
				fi
			fi
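# (editor's sketch, not captured log output) The SSH command above keeps the
# container's /etc/hosts idempotent: if no line already ends in the hostname,
# it either rewrites the existing 127.0.1.1 entry or appends a new one.
# Equivalent standalone form, with the hostname parameterised for clarity:
HOSTNAME_TO_SET=embed-certs-566990   # hypothetical variable; value from this profile
if ! grep -xq ".*\s${HOSTNAME_TO_SET}" /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${HOSTNAME_TO_SET}/g" /etc/hosts
  else
    echo "127.0.1.1 ${HOSTNAME_TO_SET}" | sudo tee -a /etc/hosts
  fi
fi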
	I1123 10:12:38.941390  524253 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:12:38.941475  524253 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 10:12:38.941514  524253 ubuntu.go:190] setting up certificates
	I1123 10:12:38.941525  524253 provision.go:84] configureAuth start
	I1123 10:12:38.941588  524253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-566990
	I1123 10:12:39.003538  524253 provision.go:143] copyHostCerts
	I1123 10:12:39.003616  524253 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 10:12:39.003633  524253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 10:12:39.003738  524253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 10:12:39.003846  524253 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 10:12:39.003857  524253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 10:12:39.003885  524253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 10:12:39.003943  524253 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 10:12:39.003953  524253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 10:12:39.003981  524253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 10:12:39.004039  524253 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.embed-certs-566990 san=[127.0.0.1 192.168.76.2 embed-certs-566990 localhost minikube]
	I1123 10:12:39.446737  524253 provision.go:177] copyRemoteCerts
	I1123 10:12:39.446803  524253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:12:39.446855  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:39.472012  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:38.971538  521335 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:12:38.971562  521335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:12:38.971625  521335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:12:39.003539  521335 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:12:39.003562  521335 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:12:39.003632  521335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:12:39.074287  521335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33486 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:12:39.105620  521335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33486 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:12:39.381935  521335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:12:39.382040  521335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:12:39.498776  521335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:12:39.502445  521335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:12:39.878737  521335 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-330197" to be "Ready" ...
	I1123 10:12:39.879071  521335 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 10:12:40.394894  521335 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-330197" context rescaled to 1 replicas
	I1123 10:12:40.400579  521335 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
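# (editor's sketch, not captured log output) The long kubectl pipeline a few
# lines up splices a "hosts" block into the CoreDNS Corefile so pods can
# resolve host.minikube.internal to the network gateway (192.168.85.1 for this
# profile). One way to confirm the record afterwards, purely illustrative:
kubectl --context default-k8s-diff-port-330197 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'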
	I1123 10:12:39.594174  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 10:12:39.631432  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:12:39.662326  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:12:39.689967  524253 provision.go:87] duration metric: took 748.419337ms to configureAuth
	I1123 10:12:39.690006  524253 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:12:39.690232  524253 config.go:182] Loaded profile config "embed-certs-566990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:12:39.690357  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:39.717852  524253 main.go:143] libmachine: Using SSH client type: native
	I1123 10:12:39.718184  524253 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33491 <nil> <nil>}
	I1123 10:12:39.718209  524253 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:12:40.197207  524253 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:12:40.197283  524253 machine.go:97] duration metric: took 4.854853123s to provisionDockerMachine
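# (editor's sketch, not captured log output) The SSH command just above drops a
# one-line options file for cri-o and restarts the service; 10.96.0.0/12 is the
# cluster's service CIDR, so registries exposed on service IPs (e.g. the
# registry addon) can be pulled from without TLS. Standalone form of that step:
sudo mkdir -p /etc/sysconfig
printf '%s' "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube
sudo systemctl restart crio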
	I1123 10:12:40.197318  524253 start.go:293] postStartSetup for "embed-certs-566990" (driver="docker")
	I1123 10:12:40.197373  524253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:12:40.197590  524253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:12:40.197686  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:40.229724  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:40.352159  524253 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:12:40.358411  524253 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:12:40.358451  524253 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:12:40.358470  524253 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 10:12:40.358548  524253 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 10:12:40.358642  524253 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 10:12:40.358766  524253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:12:40.370891  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:12:40.411075  524253 start.go:296] duration metric: took 213.709795ms for postStartSetup
	I1123 10:12:40.411229  524253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:12:40.411293  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:40.444674  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:40.558879  524253 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:12:40.565041  524253 fix.go:56] duration metric: took 5.677837494s for fixHost
	I1123 10:12:40.565083  524253 start.go:83] releasing machines lock for "embed-certs-566990", held for 5.677926414s
	I1123 10:12:40.565157  524253 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-566990
	I1123 10:12:40.586094  524253 ssh_runner.go:195] Run: cat /version.json
	I1123 10:12:40.586160  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:40.586427  524253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:12:40.586490  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:40.607542  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:40.625364  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:40.726401  524253 ssh_runner.go:195] Run: systemctl --version
	I1123 10:12:40.873672  524253 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:12:40.924087  524253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:12:40.929741  524253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:12:40.929849  524253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:12:40.939063  524253 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:12:40.939091  524253 start.go:496] detecting cgroup driver to use...
	I1123 10:12:40.939153  524253 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:12:40.939270  524253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:12:40.961540  524253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:12:40.982960  524253 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:12:40.983075  524253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:12:41.000648  524253 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:12:41.017773  524253 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:12:41.142938  524253 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:12:41.257362  524253 docker.go:234] disabling docker service ...
	I1123 10:12:41.257447  524253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:12:41.274100  524253 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:12:41.288195  524253 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:12:41.410357  524253 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:12:41.528945  524253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:12:41.542597  524253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:12:41.557753  524253 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:12:41.557821  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.567854  524253 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:12:41.567918  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.576624  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.587089  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.597732  524253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:12:41.606995  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.616231  524253 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.624635  524253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:12:41.633642  524253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:12:41.641219  524253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:12:41.648916  524253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:12:41.758251  524253 ssh_runner.go:195] Run: sudo systemctl restart crio
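The sed edits above are how this run points CRI-O at the registry.k8s.io/pause:3.10.1 pause image, switches it to the cgroupfs cgroup manager with conmon in the "pod" cgroup, and opens unprivileged low ports via default_sysctls before restarting the service. A minimal consolidated sketch of the same edits (paths and values copied from the log; run on the node, not the host):

	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio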
	I1123 10:12:41.951813  524253 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:12:41.951925  524253 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:12:41.955770  524253 start.go:564] Will wait 60s for crictl version
	I1123 10:12:41.955883  524253 ssh_runner.go:195] Run: which crictl
	I1123 10:12:41.959470  524253 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:12:41.986858  524253 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
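The crictl endpoint was pinned to unix:///var/run/crio/crio.sock via /etc/crictl.yaml a few lines earlier, so the runtime answers directly; a quick manual equivalent of the version check above would be:

	sudo crictl version     # expects RuntimeName: cri-o, RuntimeVersion: 1.34.2
	sudo crictl info        # runtime status and config summary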
	I1123 10:12:41.987048  524253 ssh_runner.go:195] Run: crio --version
	I1123 10:12:42.028777  524253 ssh_runner.go:195] Run: crio --version
	I1123 10:12:42.064772  524253 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:12:40.403398  521335 addons.go:530] duration metric: took 1.505302194s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1123 10:12:41.881657  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	I1123 10:12:42.067765  524253 cli_runner.go:164] Run: docker network inspect embed-certs-566990 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:12:42.087730  524253 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:12:42.092543  524253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
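The /etc/hosts update above filters out any stale host.minikube.internal entry, appends the gateway mapping, and writes the result back with sudo cp rather than a rename; keeping the same inode matters because /etc/hosts is bind-mounted inside the Docker-driver node (that last point is an inference, not something the log states). The same pattern, standalone:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts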
	I1123 10:12:42.104554  524253 kubeadm.go:884] updating cluster {Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:12:42.104708  524253 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:12:42.104775  524253 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:12:42.148463  524253 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:12:42.148490  524253 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:12:42.148557  524253 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:12:42.183482  524253 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:12:42.183511  524253 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:12:42.183520  524253 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 10:12:42.183631  524253 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-566990 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:12:42.183727  524253 ssh_runner.go:195] Run: crio config
	I1123 10:12:42.243185  524253 cni.go:84] Creating CNI manager for ""
	I1123 10:12:42.243216  524253 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:12:42.243250  524253 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:12:42.243278  524253 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-566990 NodeName:embed-certs-566990 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:12:42.243415  524253 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-566990"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:12:42.243496  524253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:12:42.253200  524253 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:12:42.253283  524253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:12:42.263475  524253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1123 10:12:42.278930  524253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:12:42.293834  524253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
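The 2215-byte kubeadm.yaml.new staged here is the four-document config printed above: InitConfiguration (advertise address, CRI socket, node name), ClusterConfiguration (API server SANs and admission plugins, etcd data dir, pod/service CIDRs), KubeletConfiguration (cgroupfs driver, CRI-O endpoint, relaxed eviction thresholds), and KubeProxyConfiguration (cluster CIDR, metrics bind address). A hedged pre-flight check, assuming the staged kubeadm binary supports the validate subcommand:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new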
	I1123 10:12:42.308522  524253 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:12:42.318133  524253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:12:42.328690  524253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:12:42.450255  524253 ssh_runner.go:195] Run: sudo systemctl start kubelet
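At this point the kubelet drop-in (10-kubeadm.conf) and unit file have been written, systemd reloaded, and kubelet started with the --hostname-override and --node-ip flags shown in the generated unit above. A sketch for inspecting the effective unit on the node:

	systemctl cat kubelet                       # kubelet.service plus the 10-kubeadm.conf drop-in
	systemctl status kubelet --no-pager -n 20   # recent kubelet log lines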
	I1123 10:12:42.467317  524253 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990 for IP: 192.168.76.2
	I1123 10:12:42.467386  524253 certs.go:195] generating shared ca certs ...
	I1123 10:12:42.467417  524253 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:42.467593  524253 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:12:42.467667  524253 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:12:42.467703  524253 certs.go:257] generating profile certs ...
	I1123 10:12:42.467842  524253 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/client.key
	I1123 10:12:42.467921  524253 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key.e8338b8a
	I1123 10:12:42.468004  524253 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.key
	I1123 10:12:42.468177  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:12:42.468238  524253 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:12:42.468263  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:12:42.468320  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:12:42.468371  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:12:42.468429  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:12:42.468507  524253 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:12:42.469182  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:12:42.489499  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:12:42.513609  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:12:42.531981  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:12:42.555102  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 10:12:42.593107  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:12:42.619088  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:12:42.638840  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/embed-certs-566990/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:12:42.664107  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:12:42.687079  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:12:42.707155  524253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:12:42.727647  524253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:12:42.741153  524253 ssh_runner.go:195] Run: openssl version
	I1123 10:12:42.747737  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:12:42.756819  524253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:12:42.761068  524253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:12:42.761172  524253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:12:42.808303  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:12:42.816294  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:12:42.824328  524253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:12:42.828098  524253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:12:42.828195  524253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:12:42.874451  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:12:42.883803  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:12:42.892299  524253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:12:42.896079  524253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:12:42.896145  524253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:12:42.937488  524253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
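The openssl/ln pairs above implement OpenSSL's hashed-CA lookup: openssl x509 -hash -noout -in <cert> prints the subject-name hash, and a symlink named <hash>.0 in /etc/ssl/certs lets TLS clients locate the CA by hash (the values match the log, e.g. b5213941 for minikubeCA.pem). Reproducing one link by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"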
	I1123 10:12:42.945495  524253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:12:42.949292  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:12:42.990640  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:12:43.033735  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:12:43.077119  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:12:43.124424  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:12:43.174673  524253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
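Each -checkend 86400 run above asks OpenSSL whether the certificate will still be valid in 86400 seconds (24 hours); a non-zero exit presumably pushes minikube to regenerate that cert instead of reusing it. Standalone form:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for >= 24h" || echo "expires within 24h"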
	I1123 10:12:43.232910  524253 kubeadm.go:401] StartCluster: {Name:embed-certs-566990 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-566990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:12:43.233002  524253 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:12:43.233065  524253 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:12:43.269122  524253 cri.go:89] found id: "34c25c168914821590128b9aa6e866de7484d016e755b1b4599ef135b1d8e798"
	I1123 10:12:43.269144  524253 cri.go:89] found id: ""
	I1123 10:12:43.269199  524253 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:12:43.282719  524253 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:12:43Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:12:43.282790  524253 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:12:43.298825  524253 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:12:43.298846  524253 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:12:43.298897  524253 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:12:43.315206  524253 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:12:43.315813  524253 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-566990" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:12:43.318424  524253 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-282998/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-566990" cluster setting kubeconfig missing "embed-certs-566990" context setting]
	I1123 10:12:43.319084  524253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:43.322312  524253 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:12:43.353123  524253 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:12:43.353159  524253 kubeadm.go:602] duration metric: took 54.305944ms to restartPrimaryControlPlane
	I1123 10:12:43.353169  524253 kubeadm.go:403] duration metric: took 120.268964ms to StartCluster
	I1123 10:12:43.353194  524253 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:43.353259  524253 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:12:43.354601  524253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:12:43.354822  524253 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:12:43.355113  524253 config.go:182] Loaded profile config "embed-certs-566990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:12:43.355160  524253 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:12:43.355225  524253 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-566990"
	I1123 10:12:43.355239  524253 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-566990"
	W1123 10:12:43.355250  524253 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:12:43.355270  524253 host.go:66] Checking if "embed-certs-566990" exists ...
	I1123 10:12:43.355693  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:43.356237  524253 addons.go:70] Setting dashboard=true in profile "embed-certs-566990"
	I1123 10:12:43.356271  524253 addons.go:239] Setting addon dashboard=true in "embed-certs-566990"
	W1123 10:12:43.356279  524253 addons.go:248] addon dashboard should already be in state true
	I1123 10:12:43.356301  524253 host.go:66] Checking if "embed-certs-566990" exists ...
	I1123 10:12:43.356703  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:43.359720  524253 addons.go:70] Setting default-storageclass=true in profile "embed-certs-566990"
	I1123 10:12:43.359984  524253 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-566990"
	I1123 10:12:43.360051  524253 out.go:179] * Verifying Kubernetes components...
	I1123 10:12:43.360348  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:43.367497  524253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:12:43.407800  524253 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:12:43.410904  524253 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:12:43.414371  524253 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:12:43.414380  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:12:43.414465  524253 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:12:43.414542  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:43.418448  524253 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:12:43.418496  524253 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:12:43.418571  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:43.423653  524253 addons.go:239] Setting addon default-storageclass=true in "embed-certs-566990"
	W1123 10:12:43.423687  524253 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:12:43.423712  524253 host.go:66] Checking if "embed-certs-566990" exists ...
	I1123 10:12:43.424159  524253 cli_runner.go:164] Run: docker container inspect embed-certs-566990 --format={{.State.Status}}
	I1123 10:12:43.484207  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:43.492577  524253 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:12:43.492597  524253 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:12:43.492656  524253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-566990
	I1123 10:12:43.497098  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:43.521919  524253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33491 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/embed-certs-566990/id_rsa Username:docker}
	I1123 10:12:43.738338  524253 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:12:43.742349  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:12:43.742371  524253 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:12:43.752462  524253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:12:43.776705  524253 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:12:43.857503  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:12:43.857585  524253 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:12:43.905189  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:12:43.905260  524253 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:12:43.939612  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:12:43.939682  524253 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:12:44.007517  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:12:44.007602  524253 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:12:44.095700  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:12:44.095770  524253 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:12:44.119812  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:12:44.119886  524253 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:12:44.140047  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:12:44.140117  524253 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:12:44.163777  524253 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:12:44.163853  524253 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:12:44.183465  524253 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
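All ten dashboard manifests are first staged under /etc/kubernetes/addons and then applied in the single kubectl invocation above, using the in-cluster kubeconfig. A follow-up check, assuming the usual kubernetes-dashboard namespace created by dashboard-ns.yaml:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard get deploy,svc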
	W1123 10:12:43.882006  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:12:45.882054  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	I1123 10:12:48.392297  524253 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.639773581s)
	I1123 10:12:48.392374  524253 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.654013377s)
	I1123 10:12:48.392342  524253 node_ready.go:35] waiting up to 6m0s for node "embed-certs-566990" to be "Ready" ...
	I1123 10:12:48.442887  524253 node_ready.go:49] node "embed-certs-566990" is "Ready"
	I1123 10:12:48.442917  524253 node_ready.go:38] duration metric: took 50.463895ms for node "embed-certs-566990" to be "Ready" ...
	I1123 10:12:48.442930  524253 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:12:48.442992  524253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:12:49.420300  524253 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.643557424s)
	I1123 10:12:49.459357  524253 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.275803306s)
	I1123 10:12:49.459655  524253 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.01664451s)
	I1123 10:12:49.459700  524253 api_server.go:72] duration metric: took 6.104846088s to wait for apiserver process to appear ...
	I1123 10:12:49.459721  524253 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:12:49.459752  524253 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:12:49.462839  524253 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-566990 addons enable metrics-server
	
	I1123 10:12:49.465769  524253 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1123 10:12:49.468747  524253 addons.go:530] duration metric: took 6.113576404s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1123 10:12:49.477782  524253 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:12:49.477824  524253 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:12:47.883357  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:12:50.381379  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:12:52.381750  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	I1123 10:12:49.960515  524253 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:12:49.968505  524253 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:12:49.969616  524253 api_server.go:141] control plane version: v1.34.1
	I1123 10:12:49.969687  524253 api_server.go:131] duration metric: took 509.945698ms to wait for apiserver health ...
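The first healthz probe returned 500 only because the rbac/bootstrap-roles post-start hook had not yet finished; every other check was already [+] ok, and the retry roughly half a second later got a plain 200. The same probe can be reproduced with curl (verbose mode returns the per-check breakdown even on success; -k skips TLS verification for a quick manual look):

	curl -sk "https://192.168.76.2:8443/healthz?verbose"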
	I1123 10:12:49.969704  524253 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:12:49.973092  524253 system_pods.go:59] 8 kube-system pods found
	I1123 10:12:49.973138  524253 system_pods.go:61] "coredns-66bc5c9577-d8sh7" [737943ee-552c-4a07-aa55-978b687c5b59] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:12:49.973188  524253 system_pods.go:61] "etcd-embed-certs-566990" [020c43bd-55e0-40c2-8119-1370611def91] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:12:49.973195  524253 system_pods.go:61] "kindnet-p6kh4" [0047dc25-c013-471f-89b6-22e1399e2dc9] Running
	I1123 10:12:49.973210  524253 system_pods.go:61] "kube-apiserver-embed-certs-566990" [cdc3c57e-a09e-45f0-85f3-865174df4118] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:12:49.973219  524253 system_pods.go:61] "kube-controller-manager-embed-certs-566990" [8057de0a-12ee-4d41-8535-b1b4db1c022e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:12:49.973245  524253 system_pods.go:61] "kube-proxy-k4lvf" [88d44863-5a0e-44f5-9806-2e6e769dc05b] Running
	I1123 10:12:49.973262  524253 system_pods.go:61] "kube-scheduler-embed-certs-566990" [63997f1e-1056-4acd-a564-a8fddff7356f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:12:49.973278  524253 system_pods.go:61] "storage-provisioner" [9f1e25da-6804-44f0-aa70-5ff52015cd12] Running
	I1123 10:12:49.973292  524253 system_pods.go:74] duration metric: took 3.57404ms to wait for pod list to return data ...
	I1123 10:12:49.973301  524253 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:12:49.976664  524253 default_sa.go:45] found service account: "default"
	I1123 10:12:49.976693  524253 default_sa.go:55] duration metric: took 3.382866ms for default service account to be created ...
	I1123 10:12:49.976704  524253 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:12:49.982365  524253 system_pods.go:86] 8 kube-system pods found
	I1123 10:12:49.982407  524253 system_pods.go:89] "coredns-66bc5c9577-d8sh7" [737943ee-552c-4a07-aa55-978b687c5b59] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:12:49.982451  524253 system_pods.go:89] "etcd-embed-certs-566990" [020c43bd-55e0-40c2-8119-1370611def91] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:12:49.982468  524253 system_pods.go:89] "kindnet-p6kh4" [0047dc25-c013-471f-89b6-22e1399e2dc9] Running
	I1123 10:12:49.982476  524253 system_pods.go:89] "kube-apiserver-embed-certs-566990" [cdc3c57e-a09e-45f0-85f3-865174df4118] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:12:49.982497  524253 system_pods.go:89] "kube-controller-manager-embed-certs-566990" [8057de0a-12ee-4d41-8535-b1b4db1c022e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:12:49.982517  524253 system_pods.go:89] "kube-proxy-k4lvf" [88d44863-5a0e-44f5-9806-2e6e769dc05b] Running
	I1123 10:12:49.982534  524253 system_pods.go:89] "kube-scheduler-embed-certs-566990" [63997f1e-1056-4acd-a564-a8fddff7356f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:12:49.982539  524253 system_pods.go:89] "storage-provisioner" [9f1e25da-6804-44f0-aa70-5ff52015cd12] Running
	I1123 10:12:49.982558  524253 system_pods.go:126] duration metric: took 5.847799ms to wait for k8s-apps to be running ...
	I1123 10:12:49.982573  524253 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:12:49.982651  524253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:12:50.008979  524253 system_svc.go:56] duration metric: took 26.393587ms WaitForService to wait for kubelet
	I1123 10:12:50.009017  524253 kubeadm.go:587] duration metric: took 6.654160393s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:12:50.009064  524253 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:12:50.021985  524253 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:12:50.022022  524253 node_conditions.go:123] node cpu capacity is 2
	I1123 10:12:50.022037  524253 node_conditions.go:105] duration metric: took 12.963649ms to run NodePressure ...
	I1123 10:12:50.022074  524253 start.go:242] waiting for startup goroutines ...
	I1123 10:12:50.022089  524253 start.go:247] waiting for cluster config update ...
	I1123 10:12:50.022101  524253 start.go:256] writing updated cluster config ...
	I1123 10:12:50.022423  524253 ssh_runner.go:195] Run: rm -f paused
	I1123 10:12:50.028023  524253 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:12:50.036471  524253 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d8sh7" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 10:12:52.042385  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:12:54.043234  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:12:54.382172  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:12:56.382957  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:12:56.543953  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:12:58.544260  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:12:58.882752  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:01.382144  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:01.043682  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:03.542248  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:03.881887  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:06.381622  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:05.543504  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:08.041974  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:08.881893  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:10.882009  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:10.042479  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:12.542369  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:13.382245  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:15.882066  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:15.045187  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:17.542973  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:17.883543  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	W1123 10:13:20.381661  521335 node_ready.go:57] node "default-k8s-diff-port-330197" has "Ready":"False" status (will retry)
	I1123 10:13:21.382065  521335 node_ready.go:49] node "default-k8s-diff-port-330197" is "Ready"
	I1123 10:13:21.382097  521335 node_ready.go:38] duration metric: took 41.503320364s for node "default-k8s-diff-port-330197" to be "Ready" ...
	I1123 10:13:21.382111  521335 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:13:21.382174  521335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:13:21.396972  521335 api_server.go:72] duration metric: took 42.499187505s to wait for apiserver process to appear ...
	I1123 10:13:21.397004  521335 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:13:21.397025  521335 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 10:13:21.412797  521335 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 10:13:21.415245  521335 api_server.go:141] control plane version: v1.34.1
	I1123 10:13:21.415282  521335 api_server.go:131] duration metric: took 18.270424ms to wait for apiserver health ...
	I1123 10:13:21.415291  521335 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:13:21.427092  521335 system_pods.go:59] 8 kube-system pods found
	I1123 10:13:21.427135  521335 system_pods.go:61] "coredns-66bc5c9577-pphv6" [0a9030ea-483e-46e0-8d24-2b0dd1fe99ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:13:21.427143  521335 system_pods.go:61] "etcd-default-k8s-diff-port-330197" [04e76740-6a3c-4f4e-9b5d-2c8999bef68a] Running
	I1123 10:13:21.427149  521335 system_pods.go:61] "kindnet-wfv8n" [aa574e11-da93-494e-8803-f1af18bb542d] Running
	I1123 10:13:21.427161  521335 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-330197" [abfd2542-91c6-409a-b0bf-6b1cf4f427e9] Running
	I1123 10:13:21.427166  521335 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-330197" [c343422c-7b25-41eb-aca3-ae06812b0f50] Running
	I1123 10:13:21.427171  521335 system_pods.go:61] "kube-proxy-75qqt" [e9999f1a-4069-470f-9b88-f9bff97ff125] Running
	I1123 10:13:21.427175  521335 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-330197" [010409e0-e0ee-4de9-a9e6-23ea4a90a923] Running
	I1123 10:13:21.427181  521335 system_pods.go:61] "storage-provisioner" [41502cc7-b934-4a0a-911f-9fb784b38dc3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:13:21.427194  521335 system_pods.go:74] duration metric: took 11.896873ms to wait for pod list to return data ...
	I1123 10:13:21.427202  521335 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:13:21.430678  521335 default_sa.go:45] found service account: "default"
	I1123 10:13:21.430707  521335 default_sa.go:55] duration metric: took 3.491814ms for default service account to be created ...
	I1123 10:13:21.430727  521335 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:13:21.436198  521335 system_pods.go:86] 8 kube-system pods found
	I1123 10:13:21.436235  521335 system_pods.go:89] "coredns-66bc5c9577-pphv6" [0a9030ea-483e-46e0-8d24-2b0dd1fe99ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:13:21.436243  521335 system_pods.go:89] "etcd-default-k8s-diff-port-330197" [04e76740-6a3c-4f4e-9b5d-2c8999bef68a] Running
	I1123 10:13:21.436259  521335 system_pods.go:89] "kindnet-wfv8n" [aa574e11-da93-494e-8803-f1af18bb542d] Running
	I1123 10:13:21.436265  521335 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-330197" [abfd2542-91c6-409a-b0bf-6b1cf4f427e9] Running
	I1123 10:13:21.436270  521335 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-330197" [c343422c-7b25-41eb-aca3-ae06812b0f50] Running
	I1123 10:13:21.436277  521335 system_pods.go:89] "kube-proxy-75qqt" [e9999f1a-4069-470f-9b88-f9bff97ff125] Running
	I1123 10:13:21.436281  521335 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-330197" [010409e0-e0ee-4de9-a9e6-23ea4a90a923] Running
	I1123 10:13:21.436287  521335 system_pods.go:89] "storage-provisioner" [41502cc7-b934-4a0a-911f-9fb784b38dc3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:13:21.436316  521335 retry.go:31] will retry after 267.688988ms: missing components: kube-dns
	I1123 10:13:21.719855  521335 system_pods.go:86] 8 kube-system pods found
	I1123 10:13:21.719901  521335 system_pods.go:89] "coredns-66bc5c9577-pphv6" [0a9030ea-483e-46e0-8d24-2b0dd1fe99ff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:13:21.719909  521335 system_pods.go:89] "etcd-default-k8s-diff-port-330197" [04e76740-6a3c-4f4e-9b5d-2c8999bef68a] Running
	I1123 10:13:21.719916  521335 system_pods.go:89] "kindnet-wfv8n" [aa574e11-da93-494e-8803-f1af18bb542d] Running
	I1123 10:13:21.719920  521335 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-330197" [abfd2542-91c6-409a-b0bf-6b1cf4f427e9] Running
	I1123 10:13:21.719925  521335 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-330197" [c343422c-7b25-41eb-aca3-ae06812b0f50] Running
	I1123 10:13:21.719930  521335 system_pods.go:89] "kube-proxy-75qqt" [e9999f1a-4069-470f-9b88-f9bff97ff125] Running
	I1123 10:13:21.719934  521335 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-330197" [010409e0-e0ee-4de9-a9e6-23ea4a90a923] Running
	I1123 10:13:21.719954  521335 system_pods.go:89] "storage-provisioner" [41502cc7-b934-4a0a-911f-9fb784b38dc3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:13:21.719971  521335 retry.go:31] will retry after 299.519958ms: missing components: kube-dns
	I1123 10:13:22.024526  521335 system_pods.go:86] 8 kube-system pods found
	I1123 10:13:22.024578  521335 system_pods.go:89] "coredns-66bc5c9577-pphv6" [0a9030ea-483e-46e0-8d24-2b0dd1fe99ff] Running
	I1123 10:13:22.024597  521335 system_pods.go:89] "etcd-default-k8s-diff-port-330197" [04e76740-6a3c-4f4e-9b5d-2c8999bef68a] Running
	I1123 10:13:22.024602  521335 system_pods.go:89] "kindnet-wfv8n" [aa574e11-da93-494e-8803-f1af18bb542d] Running
	I1123 10:13:22.024617  521335 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-330197" [abfd2542-91c6-409a-b0bf-6b1cf4f427e9] Running
	I1123 10:13:22.024621  521335 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-330197" [c343422c-7b25-41eb-aca3-ae06812b0f50] Running
	I1123 10:13:22.024626  521335 system_pods.go:89] "kube-proxy-75qqt" [e9999f1a-4069-470f-9b88-f9bff97ff125] Running
	I1123 10:13:22.024630  521335 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-330197" [010409e0-e0ee-4de9-a9e6-23ea4a90a923] Running
	I1123 10:13:22.024635  521335 system_pods.go:89] "storage-provisioner" [41502cc7-b934-4a0a-911f-9fb784b38dc3] Running
	I1123 10:13:22.024643  521335 system_pods.go:126] duration metric: took 593.910164ms to wait for k8s-apps to be running ...
	I1123 10:13:22.024651  521335 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:13:22.024744  521335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:13:22.044406  521335 system_svc.go:56] duration metric: took 19.746115ms WaitForService to wait for kubelet
	I1123 10:13:22.044435  521335 kubeadm.go:587] duration metric: took 43.146660616s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:13:22.044455  521335 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:13:22.047630  521335 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:13:22.047663  521335 node_conditions.go:123] node cpu capacity is 2
	I1123 10:13:22.047677  521335 node_conditions.go:105] duration metric: took 3.217127ms to run NodePressure ...
	I1123 10:13:22.047694  521335 start.go:242] waiting for startup goroutines ...
	I1123 10:13:22.047702  521335 start.go:247] waiting for cluster config update ...
	I1123 10:13:22.047713  521335 start.go:256] writing updated cluster config ...
	I1123 10:13:22.048016  521335 ssh_runner.go:195] Run: rm -f paused
	I1123 10:13:22.052242  521335 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:13:22.056674  521335 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pphv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.062088  521335 pod_ready.go:94] pod "coredns-66bc5c9577-pphv6" is "Ready"
	I1123 10:13:22.062114  521335 pod_ready.go:86] duration metric: took 5.372157ms for pod "coredns-66bc5c9577-pphv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.064742  521335 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.069906  521335 pod_ready.go:94] pod "etcd-default-k8s-diff-port-330197" is "Ready"
	I1123 10:13:22.069933  521335 pod_ready.go:86] duration metric: took 5.16526ms for pod "etcd-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.072948  521335 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.078059  521335 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-330197" is "Ready"
	I1123 10:13:22.078084  521335 pod_ready.go:86] duration metric: took 5.111442ms for pod "kube-apiserver-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.080881  521335 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.457153  521335 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-330197" is "Ready"
	I1123 10:13:22.457182  521335 pod_ready.go:86] duration metric: took 376.276326ms for pod "kube-controller-manager-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:22.657820  521335 pod_ready.go:83] waiting for pod "kube-proxy-75qqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:23.057150  521335 pod_ready.go:94] pod "kube-proxy-75qqt" is "Ready"
	I1123 10:13:23.057200  521335 pod_ready.go:86] duration metric: took 399.352644ms for pod "kube-proxy-75qqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:23.257221  521335 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:23.657699  521335 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-330197" is "Ready"
	I1123 10:13:23.657728  521335 pod_ready.go:86] duration metric: took 400.478199ms for pod "kube-scheduler-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:23.657742  521335 pod_ready.go:40] duration metric: took 1.605465474s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:13:23.714769  521335 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:13:23.718401  521335 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-330197" cluster and "default" namespace by default
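The run above validates the API server on the non-default port by probing https://192.168.85.2:8444/healthz and accepting a bare "ok". A minimal manual equivalent, assuming anonymous access to /healthz is still allowed (the kubeadm default via the system:public-info-viewer binding):

    # unauthenticated probe of the apiserver health endpoint on port 8444
    curl -k https://192.168.85.2:8444/healthz
    # expected response body: ok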
	W1123 10:13:20.044347  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	W1123 10:13:22.542279  524253 pod_ready.go:104] pod "coredns-66bc5c9577-d8sh7" is not "Ready", error: <nil>
	I1123 10:13:24.041853  524253 pod_ready.go:94] pod "coredns-66bc5c9577-d8sh7" is "Ready"
	I1123 10:13:24.041901  524253 pod_ready.go:86] duration metric: took 34.005401362s for pod "coredns-66bc5c9577-d8sh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.044925  524253 pod_ready.go:83] waiting for pod "etcd-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.049926  524253 pod_ready.go:94] pod "etcd-embed-certs-566990" is "Ready"
	I1123 10:13:24.049956  524253 pod_ready.go:86] duration metric: took 5.009345ms for pod "etcd-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.052293  524253 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.057957  524253 pod_ready.go:94] pod "kube-apiserver-embed-certs-566990" is "Ready"
	I1123 10:13:24.057987  524253 pod_ready.go:86] duration metric: took 5.668112ms for pod "kube-apiserver-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.060529  524253 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.240980  524253 pod_ready.go:94] pod "kube-controller-manager-embed-certs-566990" is "Ready"
	I1123 10:13:24.241007  524253 pod_ready.go:86] duration metric: took 180.45021ms for pod "kube-controller-manager-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.440356  524253 pod_ready.go:83] waiting for pod "kube-proxy-k4lvf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:24.840001  524253 pod_ready.go:94] pod "kube-proxy-k4lvf" is "Ready"
	I1123 10:13:24.840031  524253 pod_ready.go:86] duration metric: took 399.646726ms for pod "kube-proxy-k4lvf" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:25.040159  524253 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:25.439745  524253 pod_ready.go:94] pod "kube-scheduler-embed-certs-566990" is "Ready"
	I1123 10:13:25.439815  524253 pod_ready.go:86] duration metric: took 399.627786ms for pod "kube-scheduler-embed-certs-566990" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:13:25.439837  524253 pod_ready.go:40] duration metric: took 35.411776765s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:13:25.495301  524253 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:13:25.498797  524253 out.go:179] * Done! kubectl is now configured to use "embed-certs-566990" cluster and "default" namespace by default
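Both profiles close with the same readiness sequence: kube-system pods running, the default service account present, the kubelet unit active, and each control-plane pod reporting Ready. A rough manual equivalent with standard tooling, assuming the kubeconfig contexts carry the profile names as minikube configures by default (illustrative commands, not what the harness itself ran):

    # pods and DNS readiness in kube-system
    kubectl --context embed-certs-566990 -n kube-system get pods
    kubectl --context embed-certs-566990 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    # default service account and kubelet unit
    kubectl --context embed-certs-566990 get serviceaccount default
    out/minikube-linux-arm64 -p embed-certs-566990 ssh "sudo systemctl is-active kubelet"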
	
	
	==> CRI-O <==
	Nov 23 10:13:26 embed-certs-566990 crio[655]: time="2025-11-23T10:13:26.663624449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:13:26 embed-certs-566990 crio[655]: time="2025-11-23T10:13:26.683638933Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:13:26 embed-certs-566990 crio[655]: time="2025-11-23T10:13:26.684202306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:13:26 embed-certs-566990 crio[655]: time="2025-11-23T10:13:26.699922362Z" level=info msg="Created container 955185be0a8e3482f73c38cb4aead784358d9023b8b6180ccd3cf62d25134e1e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw/dashboard-metrics-scraper" id=fddbd392-8f19-4dc1-abc3-aba7e3083da9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:13:26 embed-certs-566990 crio[655]: time="2025-11-23T10:13:26.702369163Z" level=info msg="Starting container: 955185be0a8e3482f73c38cb4aead784358d9023b8b6180ccd3cf62d25134e1e" id=987315f1-22d1-4479-8a9a-4bb82b39ef35 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:13:26 embed-certs-566990 crio[655]: time="2025-11-23T10:13:26.705462989Z" level=info msg="Started container" PID=1670 containerID=955185be0a8e3482f73c38cb4aead784358d9023b8b6180ccd3cf62d25134e1e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw/dashboard-metrics-scraper id=987315f1-22d1-4479-8a9a-4bb82b39ef35 name=/runtime.v1.RuntimeService/StartContainer sandboxID=31152e8e217d068321c511d88ace4be27f5c9844555c44f683a410240eefb3c5
	Nov 23 10:13:26 embed-certs-566990 conmon[1668]: conmon 955185be0a8e3482f73c <ninfo>: container 1670 exited with status 1
	Nov 23 10:13:26 embed-certs-566990 crio[655]: time="2025-11-23T10:13:26.990815333Z" level=info msg="Removing container: 6cbb2955381eb442dc997176a519320f443a3d0f400499f6c07adc40e030e59d" id=7fabb1f8-fa63-48b8-ba6d-063eceb0db11 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:13:27 embed-certs-566990 crio[655]: time="2025-11-23T10:13:27.003142998Z" level=info msg="Error loading conmon cgroup of container 6cbb2955381eb442dc997176a519320f443a3d0f400499f6c07adc40e030e59d: cgroup deleted" id=7fabb1f8-fa63-48b8-ba6d-063eceb0db11 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:13:27 embed-certs-566990 crio[655]: time="2025-11-23T10:13:27.017182281Z" level=info msg="Removed container 6cbb2955381eb442dc997176a519320f443a3d0f400499f6c07adc40e030e59d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw/dashboard-metrics-scraper" id=7fabb1f8-fa63-48b8-ba6d-063eceb0db11 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.176923549Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.180651732Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.180687737Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.180710055Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.184403522Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.18443471Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.184459703Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.187649777Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.187684444Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.187707928Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.190910785Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.190944968Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.190968435Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.194089495Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:13:29 embed-certs-566990 crio[655]: time="2025-11-23T10:13:29.194127379Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	955185be0a8e3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   31152e8e217d0       dashboard-metrics-scraper-6ffb444bf9-gj2vw   kubernetes-dashboard
	1f2c0a1a12843       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago      Running             storage-provisioner         2                   644fa530760f5       storage-provisioner                          kube-system
	cbfffd99f2e09       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago      Running             kubernetes-dashboard        0                   be26a42acedc3       kubernetes-dashboard-855c9754f9-hmrpb        kubernetes-dashboard
	19dc4b8d2d9db       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago      Running             coredns                     1                   46e4b4ba1ad98       coredns-66bc5c9577-d8sh7                     kube-system
	f6b85f94b8d9f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago      Running             kindnet-cni                 1                   367609e92d6d8       kindnet-p6kh4                                kube-system
	1ebd1454aec7a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago      Running             busybox                     1                   fa5d6ef527fc7       busybox                                      default
	b5bff28be9cd6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago      Exited              storage-provisioner         1                   644fa530760f5       storage-provisioner                          kube-system
	2b35205fbca87       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago      Running             kube-proxy                  1                   69af23f1ac06d       kube-proxy-k4lvf                             kube-system
	093ac2649d8d4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   10f70cb1d509c       kube-controller-manager-embed-certs-566990   kube-system
	d1785fb925da4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   9426b21756a0a       kube-scheduler-embed-certs-566990            kube-system
	34c25c1689148       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   4e244f53a0441       etcd-embed-certs-566990                      kube-system
	f29cb2a59da87       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   ecb076407c2bd       kube-apiserver-embed-certs-566990            kube-system
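The listing above is CRI-O's own view of the node. The same table, and the log of the exited dashboard-metrics-scraper container, can be pulled over the profile's SSH session, assuming crictl is present in the node image as it is for the crio runtime used here:

    out/minikube-linux-arm64 -p embed-certs-566990 ssh "sudo crictl ps -a"
    out/minikube-linux-arm64 -p embed-certs-566990 ssh "sudo crictl logs 955185be0a8e3482f73c38cb4aead784358d9023b8b6180ccd3cf62d25134e1e"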
	
	
	==> coredns [19dc4b8d2d9db97e17ff50ea3872f7c8f26c53f8c48c68cbd62ab46f6229554a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60564 - 44673 "HINFO IN 5131750717403680825.5759646936207733108. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026574495s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
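The coredns log shows the kubernetes plugin timing out against 10.96.0.1:443 while the restarted apiserver was still coming up, then serving normally. Once the pod reports Ready, resolution can be spot-checked from the busybox pod already scheduled on this node (a quick manual check, not part of the captured run):

    kubectl --context embed-certs-566990 -n kube-system get pods -l k8s-app=kube-dns
    kubectl --context embed-certs-566990 exec busybox -- nslookup kubernetes.default.svc.cluster.local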
	
	
	==> describe nodes <==
	Name:               embed-certs-566990
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-566990
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=embed-certs-566990
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_11_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:11:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-566990
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:13:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:13:29 +0000   Sun, 23 Nov 2025 10:11:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:13:29 +0000   Sun, 23 Nov 2025 10:11:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:13:29 +0000   Sun, 23 Nov 2025 10:11:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:13:29 +0000   Sun, 23 Nov 2025 10:12:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-566990
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                7626cdea-55dc-447c-9203-313e96141bd6
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-d8sh7                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-embed-certs-566990                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-p6kh4                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-embed-certs-566990             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-embed-certs-566990    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-k4lvf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-embed-certs-566990             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gj2vw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hmrpb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m16s                  kube-proxy       
	  Normal   Starting                 53s                    kube-proxy       
	  Warning  CgroupV1                 2m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s (x8 over 2m31s)  kubelet          Node embed-certs-566990 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s (x8 over 2m31s)  kubelet          Node embed-certs-566990 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s (x8 over 2m31s)  kubelet          Node embed-certs-566990 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m23s                  kubelet          Node embed-certs-566990 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m23s                  kubelet          Node embed-certs-566990 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m23s                  kubelet          Node embed-certs-566990 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m19s                  node-controller  Node embed-certs-566990 event: Registered Node embed-certs-566990 in Controller
	  Normal   NodeReady                96s                    kubelet          Node embed-certs-566990 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node embed-certs-566990 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node embed-certs-566990 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node embed-certs-566990 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node embed-certs-566990 event: Registered Node embed-certs-566990 in Controller
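The describe output above is what the node_ready and NodePressure checks in the startup log read from the API. The same conditions can be queried directly, for example with a jsonpath filter (syntax as supported by kubectl):

    kubectl --context embed-certs-566990 get node embed-certs-566990 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    kubectl --context embed-certs-566990 describe node embed-certs-566990 | grep -A 8 'Allocated resources'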
	
	
	==> dmesg <==
	[ +14.190024] overlayfs: idmapped layers are currently not supported
	[Nov23 09:49] overlayfs: idmapped layers are currently not supported
	[Nov23 09:50] overlayfs: idmapped layers are currently not supported
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	[Nov23 10:10] overlayfs: idmapped layers are currently not supported
	[Nov23 10:11] overlayfs: idmapped layers are currently not supported
	[Nov23 10:12] overlayfs: idmapped layers are currently not supported
	[ +16.378417] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [34c25c168914821590128b9aa6e866de7484d016e755b1b4599ef135b1d8e798] <==
	{"level":"warn","ts":"2025-11-23T10:12:46.089114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.105130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.130257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.146448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.168739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.185958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.196409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.221146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.240085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.262871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.287179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.296273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.317246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.337492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.361717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.366716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.385383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.424532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.464229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.467665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.468954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.514740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.546161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.558072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:12:46.697069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50298","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:13:43 up  2:56,  0 user,  load average: 4.69, 4.57, 3.62
	Linux embed-certs-566990 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f6b85f94b8d9fb08196e9f8bebc066233445b88b74d7b58a3b7d49897d952cb5] <==
	I1123 10:12:48.984896       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:12:48.985170       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:12:48.985301       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:12:48.985319       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:12:48.985329       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:12:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:12:49.176090       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:12:49.176108       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:12:49.176117       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:12:49.176393       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 10:13:19.175885       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 10:13:19.176810       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 10:13:19.176889       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 10:13:19.176922       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 10:13:20.577230       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:13:20.577261       1 metrics.go:72] Registering metrics
	I1123 10:13:20.577316       1 controller.go:711] "Syncing nftables rules"
	I1123 10:13:29.176600       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:13:29.176656       1 main.go:301] handling current node
	I1123 10:13:39.177547       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:13:39.177591       1 main.go:301] handling current node
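The kindnet and CRI-O logs both track the rewrite of /etc/cni/net.d/10-kindnet.conflist after the restart. To see the CNI config the runtime finally settled on, the file can be read back from the node (path taken from the log above):

    out/minikube-linux-arm64 -p embed-certs-566990 ssh "sudo cat /etc/cni/net.d/10-kindnet.conflist"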
	
	
	==> kube-apiserver [f29cb2a59da8783e967adae52ce1168c66382986731fa4200f19d9893b3da9b2] <==
	I1123 10:12:47.891765       1 aggregator.go:171] initial CRD sync complete...
	I1123 10:12:47.891797       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 10:12:47.891806       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:12:47.969085       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 10:12:47.993727       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:12:47.994244       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:12:48.116324       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 10:12:48.118787       1 policy_source.go:240] refreshing policies
	I1123 10:12:48.128153       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 10:12:48.129193       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 10:12:48.129566       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:12:48.151971       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 10:12:48.152011       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 10:12:48.175694       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:12:48.300889       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:12:48.501589       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:12:49.064921       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:12:49.181857       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:12:49.251462       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:12:49.272991       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:12:49.434058       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.230.227"}
	I1123 10:12:49.452925       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.91.74"}
	I1123 10:12:51.923129       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:12:52.161016       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:12:52.361463       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [093ac2649d8d4c27fb9abf9413c73fc91911e373c30d8cfb1b331503417cbb03] <==
	I1123 10:12:51.914448       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:12:51.917109       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:12:51.920197       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 10:12:51.921253       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 10:12:51.921347       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 10:12:51.921399       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 10:12:51.921493       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 10:12:51.921522       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 10:12:51.922910       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 10:12:51.927599       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 10:12:51.930658       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:12:51.933033       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:12:51.936765       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 10:12:51.942124       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 10:12:51.947377       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:12:51.948951       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:12:51.955236       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 10:12:51.955250       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 10:12:51.955268       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 10:12:51.957050       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:12:51.957127       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:12:51.957172       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:12:51.957133       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:12:51.957176       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:12:51.961096       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	
	
	==> kube-proxy [2b35205fbca876dcf845d877fb53cf5356a2ead6e0e926f5cbe593d89e17d643] <==
	I1123 10:12:49.092883       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:12:49.290293       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:12:49.430247       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:12:49.430375       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 10:12:49.430498       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:12:49.490427       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:12:49.490541       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:12:49.496757       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:12:49.497363       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:12:49.497660       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:12:49.499144       1 config.go:200] "Starting service config controller"
	I1123 10:12:49.499203       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:12:49.499246       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:12:49.499283       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:12:49.499343       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:12:49.499379       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:12:49.500066       1 config.go:309] "Starting node config controller"
	I1123 10:12:49.500127       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:12:49.500159       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:12:49.599833       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:12:49.599868       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:12:49.599930       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d1785fb925da49928f40a36ef58b27c751da4842126c62aae26166fa662da54e] <==
	I1123 10:12:45.668328       1 serving.go:386] Generated self-signed cert in-memory
	I1123 10:12:48.870278       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:12:48.870309       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:12:48.893844       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:12:48.893951       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 10:12:48.893970       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 10:12:48.893992       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:12:48.899074       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:12:48.915732       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:12:48.900462       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:12:48.916072       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:12:48.999706       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 10:12:49.016170       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:12:49.016245       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:12:52 embed-certs-566990 kubelet[784]: I1123 10:12:52.620367     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a7bf3071-fcde-4095-a28f-fb26acf0096e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hmrpb\" (UID: \"a7bf3071-fcde-4095-a28f-fb26acf0096e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hmrpb"
	Nov 23 10:12:52 embed-certs-566990 kubelet[784]: I1123 10:12:52.620395     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fcd3568a-cefb-4a84-a9c9-b420dc9e29c2-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-gj2vw\" (UID: \"fcd3568a-cefb-4a84-a9c9-b420dc9e29c2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw"
	Nov 23 10:12:52 embed-certs-566990 kubelet[784]: I1123 10:12:52.620413     784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhvmh\" (UniqueName: \"kubernetes.io/projected/fcd3568a-cefb-4a84-a9c9-b420dc9e29c2-kube-api-access-bhvmh\") pod \"dashboard-metrics-scraper-6ffb444bf9-gj2vw\" (UID: \"fcd3568a-cefb-4a84-a9c9-b420dc9e29c2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw"
	Nov 23 10:12:52 embed-certs-566990 kubelet[784]: W1123 10:12:52.865653     784 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/crio-31152e8e217d068321c511d88ace4be27f5c9844555c44f683a410240eefb3c5 WatchSource:0}: Error finding container 31152e8e217d068321c511d88ace4be27f5c9844555c44f683a410240eefb3c5: Status 404 returned error can't find the container with id 31152e8e217d068321c511d88ace4be27f5c9844555c44f683a410240eefb3c5
	Nov 23 10:12:52 embed-certs-566990 kubelet[784]: W1123 10:12:52.866140     784 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8f6ca1334711c4101212603f0c25b1165ed98786c961b0ef252e6c8783482086/crio-be26a42acedc37d242530b8ab4cfc6c2566de9f69ca5a466efb058c996c4db8c WatchSource:0}: Error finding container be26a42acedc37d242530b8ab4cfc6c2566de9f69ca5a466efb058c996c4db8c: Status 404 returned error can't find the container with id be26a42acedc37d242530b8ab4cfc6c2566de9f69ca5a466efb058c996c4db8c
	Nov 23 10:12:53 embed-certs-566990 kubelet[784]: I1123 10:12:53.831166     784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 10:12:57 embed-certs-566990 kubelet[784]: I1123 10:12:57.927286     784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hmrpb" podStartSLOduration=1.2531326329999999 podStartE2EDuration="5.927266829s" podCreationTimestamp="2025-11-23 10:12:52 +0000 UTC" firstStartedPulling="2025-11-23 10:12:52.870288606 +0000 UTC m=+10.399089862" lastFinishedPulling="2025-11-23 10:12:57.544422794 +0000 UTC m=+15.073224058" observedRunningTime="2025-11-23 10:12:57.927087454 +0000 UTC m=+15.455888751" watchObservedRunningTime="2025-11-23 10:12:57.927266829 +0000 UTC m=+15.456068101"
	Nov 23 10:13:02 embed-certs-566990 kubelet[784]: I1123 10:13:02.922127     784 scope.go:117] "RemoveContainer" containerID="1f61cf3a6c5ad65565b730eef186c9c82c39908b54c767c9135a525094ba5ada"
	Nov 23 10:13:03 embed-certs-566990 kubelet[784]: I1123 10:13:03.926588     784 scope.go:117] "RemoveContainer" containerID="1f61cf3a6c5ad65565b730eef186c9c82c39908b54c767c9135a525094ba5ada"
	Nov 23 10:13:03 embed-certs-566990 kubelet[784]: I1123 10:13:03.926885     784 scope.go:117] "RemoveContainer" containerID="6cbb2955381eb442dc997176a519320f443a3d0f400499f6c07adc40e030e59d"
	Nov 23 10:13:03 embed-certs-566990 kubelet[784]: E1123 10:13:03.927035     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gj2vw_kubernetes-dashboard(fcd3568a-cefb-4a84-a9c9-b420dc9e29c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw" podUID="fcd3568a-cefb-4a84-a9c9-b420dc9e29c2"
	Nov 23 10:13:04 embed-certs-566990 kubelet[784]: I1123 10:13:04.931614     784 scope.go:117] "RemoveContainer" containerID="6cbb2955381eb442dc997176a519320f443a3d0f400499f6c07adc40e030e59d"
	Nov 23 10:13:04 embed-certs-566990 kubelet[784]: E1123 10:13:04.931771     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gj2vw_kubernetes-dashboard(fcd3568a-cefb-4a84-a9c9-b420dc9e29c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw" podUID="fcd3568a-cefb-4a84-a9c9-b420dc9e29c2"
	Nov 23 10:13:12 embed-certs-566990 kubelet[784]: I1123 10:13:12.816570     784 scope.go:117] "RemoveContainer" containerID="6cbb2955381eb442dc997176a519320f443a3d0f400499f6c07adc40e030e59d"
	Nov 23 10:13:12 embed-certs-566990 kubelet[784]: E1123 10:13:12.816763     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gj2vw_kubernetes-dashboard(fcd3568a-cefb-4a84-a9c9-b420dc9e29c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw" podUID="fcd3568a-cefb-4a84-a9c9-b420dc9e29c2"
	Nov 23 10:13:19 embed-certs-566990 kubelet[784]: I1123 10:13:19.968342     784 scope.go:117] "RemoveContainer" containerID="b5bff28be9cd6a59d8450e8ef4e11b37cfe957b8f2342050eeae3e5a4c182b02"
	Nov 23 10:13:26 embed-certs-566990 kubelet[784]: I1123 10:13:26.660185     784 scope.go:117] "RemoveContainer" containerID="6cbb2955381eb442dc997176a519320f443a3d0f400499f6c07adc40e030e59d"
	Nov 23 10:13:26 embed-certs-566990 kubelet[784]: I1123 10:13:26.988677     784 scope.go:117] "RemoveContainer" containerID="6cbb2955381eb442dc997176a519320f443a3d0f400499f6c07adc40e030e59d"
	Nov 23 10:13:26 embed-certs-566990 kubelet[784]: I1123 10:13:26.989029     784 scope.go:117] "RemoveContainer" containerID="955185be0a8e3482f73c38cb4aead784358d9023b8b6180ccd3cf62d25134e1e"
	Nov 23 10:13:26 embed-certs-566990 kubelet[784]: E1123 10:13:26.989192     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gj2vw_kubernetes-dashboard(fcd3568a-cefb-4a84-a9c9-b420dc9e29c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw" podUID="fcd3568a-cefb-4a84-a9c9-b420dc9e29c2"
	Nov 23 10:13:32 embed-certs-566990 kubelet[784]: I1123 10:13:32.817320     784 scope.go:117] "RemoveContainer" containerID="955185be0a8e3482f73c38cb4aead784358d9023b8b6180ccd3cf62d25134e1e"
	Nov 23 10:13:32 embed-certs-566990 kubelet[784]: E1123 10:13:32.819185     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gj2vw_kubernetes-dashboard(fcd3568a-cefb-4a84-a9c9-b420dc9e29c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gj2vw" podUID="fcd3568a-cefb-4a84-a9c9-b420dc9e29c2"
	Nov 23 10:13:37 embed-certs-566990 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:13:37 embed-certs-566990 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:13:37 embed-certs-566990 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [cbfffd99f2e092d45a7787fa5a6e7773e4ecef4a0c16e5c9b7dd2f7c68af9e60] <==
	2025/11/23 10:12:57 Starting overwatch
	2025/11/23 10:12:57 Using namespace: kubernetes-dashboard
	2025/11/23 10:12:57 Using in-cluster config to connect to apiserver
	2025/11/23 10:12:57 Using secret token for csrf signing
	2025/11/23 10:12:57 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 10:12:57 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 10:12:57 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 10:12:57 Generating JWE encryption key
	2025/11/23 10:12:57 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 10:12:57 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 10:12:59 Initializing JWE encryption key from synchronized object
	2025/11/23 10:12:59 Creating in-cluster Sidecar client
	2025/11/23 10:12:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:12:59 Serving insecurely on HTTP port: 9090
	2025/11/23 10:13:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1f2c0a1a12843b954c961d5ac9cc2b63a6e365a430f494828ff5d31fa2951e5a] <==
	I1123 10:13:20.019109       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:13:20.046424       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:13:20.046545       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:13:20.048929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:23.503645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:27.763996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:31.362195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:34.417770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:37.440320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:37.445940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:13:37.446088       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:13:37.446258       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-566990_3a23e836-a1ae-452e-9523-e70cb3eed2ec!
	I1123 10:13:37.446302       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5390d33c-adeb-4208-bc55-623048fa6ee4", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-566990_3a23e836-a1ae-452e-9523-e70cb3eed2ec became leader
	W1123 10:13:37.454470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:37.458472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:13:37.547201       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-566990_3a23e836-a1ae-452e-9523-e70cb3eed2ec!
	W1123 10:13:39.461493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:39.468426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:41.471765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:41.476996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:43.479687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:13:43.487150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b5bff28be9cd6a59d8450e8ef4e11b37cfe957b8f2342050eeae3e5a4c182b02] <==
	I1123 10:12:49.009081       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 10:13:19.011609       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-566990 -n embed-certs-566990
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-566990 -n embed-certs-566990: exit status 2 (354.208498ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-566990 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.83s)
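
Note: the storage-provisioner logs above show this run's failure mode. The first instance (b5bff28be9cd) exits with `dial tcp 10.96.0.1:443: i/o timeout`, and its replacement (1f2c0a1a1284) only acquires the lease at 10:13:37, after kubelet had already been stopped. A minimal manual probe of that in-cluster apiserver VIP, assuming the embed-certs-566990 node is still running and curl is present in the kicbase image (illustrative sketch only, not part of the test harness):

    out/minikube-linux-arm64 -p embed-certs-566990 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version

A prompt JSON version reply means the 10.96.0.1 service VIP is reachable from the node; a timeout reproduces the provisioner's error outside the pod.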

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-499584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-499584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (291.687289ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:14:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-499584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
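The enable step fails before it ever touches the addon: per the stderr above, minikube's paused-state probe runs `sudo runc list -f json` on the node, that command exits 1 because /run/runc does not exist, and the CLI aborts with MK_ADDON_ENABLE_PAUSED. A quick way to reproduce the probe by hand, assuming the newest-cni-499584 container is still up (a sketch, not part of the test):

    out/minikube-linux-arm64 -p newest-cni-499584 ssh -- sudo runc list -f json
    out/minikube-linux-arm64 -p newest-cni-499584 ssh -- ls -ld /run/runc

If the first command prints the same `open /run/runc: no such file or directory` error while the cluster is otherwise serving, the failure is in the pause check rather than in the cluster itself.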
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-499584
helpers_test.go:243: (dbg) docker inspect newest-cni-499584:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5",
	        "Created": "2025-11-23T10:13:53.150463538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 530201,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:13:53.22197888Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5/hosts",
	        "LogPath": "/var/lib/docker/containers/e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5/e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5-json.log",
	        "Name": "/newest-cni-499584",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-499584:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-499584",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5",
	                "LowerDir": "/var/lib/docker/overlay2/0ddfc19f10ceb1d022289b3c3394eb4fa72b02f60299c24da29cbd9e3855f5fb-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0ddfc19f10ceb1d022289b3c3394eb4fa72b02f60299c24da29cbd9e3855f5fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0ddfc19f10ceb1d022289b3c3394eb4fa72b02f60299c24da29cbd9e3855f5fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0ddfc19f10ceb1d022289b3c3394eb4fa72b02f60299c24da29cbd9e3855f5fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-499584",
	                "Source": "/var/lib/docker/volumes/newest-cni-499584/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-499584",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-499584",
	                "name.minikube.sigs.k8s.io": "newest-cni-499584",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8eff7b5e00d43b1b971e846773893d9c407c9ea729663e2201fca2352496236d",
	            "SandboxKey": "/var/run/docker/netns/8eff7b5e00d4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33501"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33502"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-499584": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:0e:d6:8e:8b:aa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e27c561c33f1c11e6ad07d3f525986a08d52d1b7909a984158deea3644563840",
	                    "EndpointID": "061d60edc145d5312b403040cc5658eec1a476cb5483e7dea946e570ebcaef5c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-499584",
	                        "e79d7d886da1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
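The inspect dump above is mostly useful for the port map: each cluster port is published on 127.0.0.1 with an ephemeral host port (22 -> 33501, 8443 -> 33504, and so on). To read a single mapping without scanning the whole JSON, the same Go-template form the harness itself uses later in these logs works from the host (sketch; profile name taken from this run):

    docker container inspect newest-cni-499584 \
      --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'

For this container that should print 33504, the host port on which the apiserver is reachable via 127.0.0.1.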
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-499584 -n newest-cni-499584
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-499584 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-499584 logs -n 25: (1.144374733s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p no-preload-020224 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:11 UTC │
	│ delete  │ -p old-k8s-version-706028                                                                                                                                                                                                                     │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ delete  │ -p old-k8s-version-706028                                                                                                                                                                                                                     │ old-k8s-version-706028       │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:10 UTC │
	│ start   │ -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:10 UTC │ 23 Nov 25 10:12 UTC │
	│ image   │ no-preload-020224 image list --format=json                                                                                                                                                                                                    │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │ 23 Nov 25 10:11 UTC │
	│ pause   │ -p no-preload-020224 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │                     │
	│ delete  │ -p no-preload-020224                                                                                                                                                                                                                          │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │ 23 Nov 25 10:12 UTC │
	│ delete  │ -p no-preload-020224                                                                                                                                                                                                                          │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ delete  │ -p disable-driver-mounts-097888                                                                                                                                                                                                               │ disable-driver-mounts-097888 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-566990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │                     │
	│ stop    │ -p embed-certs-566990 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ addons  │ enable dashboard -p embed-certs-566990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-330197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-330197 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ image   │ embed-certs-566990 image list --format=json                                                                                                                                                                                                   │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ pause   │ -p embed-certs-566990 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	│ delete  │ -p embed-certs-566990                                                                                                                                                                                                                         │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ delete  │ -p embed-certs-566990                                                                                                                                                                                                                         │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ start   │ -p newest-cni-499584 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:14 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-330197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ start   │ -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-499584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:13:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:13:48.217656  529379 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:13:48.217788  529379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:13:48.217800  529379 out.go:374] Setting ErrFile to fd 2...
	I1123 10:13:48.217805  529379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:13:48.218212  529379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:13:48.219143  529379 out.go:368] Setting JSON to false
	I1123 10:13:48.219975  529379 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10577,"bootTime":1763882251,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:13:48.220111  529379 start.go:143] virtualization:  
	I1123 10:13:48.225114  529379 out.go:179] * [default-k8s-diff-port-330197] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:13:48.228265  529379 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 10:13:48.228343  529379 notify.go:221] Checking for updates...
	I1123 10:13:48.235930  529379 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:13:48.242497  529379 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:13:48.246413  529379 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 10:13:48.249498  529379 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:13:48.252327  529379 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:13:48.255752  529379 config.go:182] Loaded profile config "default-k8s-diff-port-330197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:13:48.256329  529379 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:13:48.308305  529379 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:13:48.308506  529379 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:13:48.394564  529379 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-23 10:13:48.384956347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:13:48.394665  529379 docker.go:319] overlay module found
	I1123 10:13:48.401592  529379 out.go:179] * Using the docker driver based on existing profile
	I1123 10:13:48.404608  529379 start.go:309] selected driver: docker
	I1123 10:13:48.404628  529379 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-330197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-330197 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:13:48.404743  529379 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:13:48.405370  529379 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:13:48.515336  529379 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:59 SystemTime:2025-11-23 10:13:48.502762843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:13:48.515699  529379 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:13:48.515724  529379 cni.go:84] Creating CNI manager for ""
	I1123 10:13:48.515774  529379 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:13:48.515810  529379 start.go:353] cluster config:
	{Name:default-k8s-diff-port-330197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-330197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:13:48.520976  529379 out.go:179] * Starting "default-k8s-diff-port-330197" primary control-plane node in "default-k8s-diff-port-330197" cluster
	I1123 10:13:48.523808  529379 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:13:48.526748  529379 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:13:48.529550  529379 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:13:48.529592  529379 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 10:13:48.529601  529379 cache.go:65] Caching tarball of preloaded images
	I1123 10:13:48.529684  529379 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 10:13:48.529693  529379 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:13:48.529805  529379 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/config.json ...
	I1123 10:13:48.530032  529379 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:13:48.551346  529379 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:13:48.551363  529379 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:13:48.551378  529379 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:13:48.551408  529379 start.go:360] acquireMachinesLock for default-k8s-diff-port-330197: {Name:mke95bbd84696d9268c86469759951e95b68110b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:13:48.551459  529379 start.go:364] duration metric: took 34.233µs to acquireMachinesLock for "default-k8s-diff-port-330197"
	I1123 10:13:48.551478  529379 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:13:48.551483  529379 fix.go:54] fixHost starting: 
	I1123 10:13:48.551742  529379 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:13:48.570945  529379 fix.go:112] recreateIfNeeded on default-k8s-diff-port-330197: state=Stopped err=<nil>
	W1123 10:13:48.570973  529379 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 10:13:47.368489  529014 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 10:13:47.368771  529014 start.go:159] libmachine.API.Create for "newest-cni-499584" (driver="docker")
	I1123 10:13:47.368807  529014 client.go:173] LocalClient.Create starting
	I1123 10:13:47.368882  529014 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem
	I1123 10:13:47.368918  529014 main.go:143] libmachine: Decoding PEM data...
	I1123 10:13:47.368940  529014 main.go:143] libmachine: Parsing certificate...
	I1123 10:13:47.368998  529014 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem
	I1123 10:13:47.369019  529014 main.go:143] libmachine: Decoding PEM data...
	I1123 10:13:47.369034  529014 main.go:143] libmachine: Parsing certificate...
	I1123 10:13:47.369398  529014 cli_runner.go:164] Run: docker network inspect newest-cni-499584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:13:47.387436  529014 cli_runner.go:211] docker network inspect newest-cni-499584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:13:47.387523  529014 network_create.go:284] running [docker network inspect newest-cni-499584] to gather additional debugging logs...
	I1123 10:13:47.387540  529014 cli_runner.go:164] Run: docker network inspect newest-cni-499584
	W1123 10:13:47.403055  529014 cli_runner.go:211] docker network inspect newest-cni-499584 returned with exit code 1
	I1123 10:13:47.403083  529014 network_create.go:287] error running [docker network inspect newest-cni-499584]: docker network inspect newest-cni-499584: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-499584 not found
	I1123 10:13:47.403097  529014 network_create.go:289] output of [docker network inspect newest-cni-499584]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-499584 not found
	
	** /stderr **
	I1123 10:13:47.403202  529014 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:13:47.420117  529014 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d56166f18c3a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0e:f2:0f:1a:18:9c} reservation:<nil>}
	I1123 10:13:47.420750  529014 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fe6f7fd59576 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:8b:f7:8e:2b:59} reservation:<nil>}
	I1123 10:13:47.421183  529014 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c262e08021b1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:16:63:f0:32:b6} reservation:<nil>}
	I1123 10:13:47.421961  529014 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a30590}
	I1123 10:13:47.422036  529014 network_create.go:124] attempt to create docker network newest-cni-499584 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 10:13:47.422132  529014 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-499584 newest-cni-499584
	I1123 10:13:47.483694  529014 network_create.go:108] docker network newest-cni-499584 192.168.76.0/24 created
	I1123 10:13:47.483730  529014 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-499584" container
	I1123 10:13:47.483828  529014 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:13:47.521043  529014 cli_runner.go:164] Run: docker volume create newest-cni-499584 --label name.minikube.sigs.k8s.io=newest-cni-499584 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:13:47.542908  529014 oci.go:103] Successfully created a docker volume newest-cni-499584
	I1123 10:13:47.542998  529014 cli_runner.go:164] Run: docker run --rm --name newest-cni-499584-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-499584 --entrypoint /usr/bin/test -v newest-cni-499584:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:13:48.242061  529014 oci.go:107] Successfully prepared a docker volume newest-cni-499584
	I1123 10:13:48.242126  529014 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:13:48.242137  529014 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:13:48.242208  529014 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-499584:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 10:13:48.574326  529379 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-330197" ...
	I1123 10:13:48.574405  529379 cli_runner.go:164] Run: docker start default-k8s-diff-port-330197
	I1123 10:13:48.876897  529379 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:13:48.897722  529379 kic.go:430] container "default-k8s-diff-port-330197" state is running.
	I1123 10:13:48.898110  529379 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-330197
	I1123 10:13:48.924730  529379 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/config.json ...
	I1123 10:13:48.924964  529379 machine.go:94] provisionDockerMachine start ...
	I1123 10:13:48.925021  529379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:13:48.951951  529379 main.go:143] libmachine: Using SSH client type: native
	I1123 10:13:48.952285  529379 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33496 <nil> <nil>}
	I1123 10:13:48.952294  529379 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:13:48.953271  529379 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 10:13:52.109100  529379 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-330197
	
	I1123 10:13:52.109129  529379 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-330197"
	I1123 10:13:52.109221  529379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:13:52.129962  529379 main.go:143] libmachine: Using SSH client type: native
	I1123 10:13:52.130326  529379 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33496 <nil> <nil>}
	I1123 10:13:52.130346  529379 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-330197 && echo "default-k8s-diff-port-330197" | sudo tee /etc/hostname
	I1123 10:13:52.292936  529379 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-330197
	
	I1123 10:13:52.293015  529379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:13:52.311828  529379 main.go:143] libmachine: Using SSH client type: native
	I1123 10:13:52.312179  529379 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33496 <nil> <nil>}
	I1123 10:13:52.312202  529379 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-330197' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-330197/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-330197' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:13:52.469955  529379 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:13:52.469988  529379 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 10:13:52.470023  529379 ubuntu.go:190] setting up certificates
	I1123 10:13:52.470033  529379 provision.go:84] configureAuth start
	I1123 10:13:52.470098  529379 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-330197
	I1123 10:13:52.489604  529379 provision.go:143] copyHostCerts
	I1123 10:13:52.489676  529379 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 10:13:52.489689  529379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 10:13:52.489764  529379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 10:13:52.489868  529379 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 10:13:52.489880  529379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 10:13:52.489916  529379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 10:13:52.489977  529379 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 10:13:52.489987  529379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 10:13:52.490016  529379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 10:13:52.490069  529379 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-330197 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-330197 localhost minikube]
	I1123 10:13:52.797455  529379 provision.go:177] copyRemoteCerts
	I1123 10:13:52.797522  529379 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:13:52.797570  529379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:13:52.813938  529379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:13:52.921476  529379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:13:52.940685  529379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 10:13:52.960923  529379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:13:52.983958  529379 provision.go:87] duration metric: took 513.902999ms to configureAuth
	I1123 10:13:52.983987  529379 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:13:52.984202  529379 config.go:182] Loaded profile config "default-k8s-diff-port-330197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:13:52.984338  529379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:13:53.010936  529379 main.go:143] libmachine: Using SSH client type: native
	I1123 10:13:53.011238  529379 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33496 <nil> <nil>}
	I1123 10:13:53.011257  529379 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:13:53.530451  529379 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:13:53.530583  529379 machine.go:97] duration metric: took 4.605608346s to provisionDockerMachine
	I1123 10:13:53.530596  529379 start.go:293] postStartSetup for "default-k8s-diff-port-330197" (driver="docker")
	I1123 10:13:53.530608  529379 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:13:53.530682  529379 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:13:53.530725  529379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:13:53.577600  529379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:13:53.714723  529379 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:13:53.721616  529379 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:13:53.721644  529379 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:13:53.721656  529379 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 10:13:53.721709  529379 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 10:13:53.721798  529379 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 10:13:53.721911  529379 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:13:53.731239  529379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:13:53.760310  529379 start.go:296] duration metric: took 229.681504ms for postStartSetup
	I1123 10:13:53.760412  529379 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:13:53.760473  529379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:13:53.806999  529379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:13:53.930850  529379 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:13:53.939319  529379 fix.go:56] duration metric: took 5.387828516s for fixHost
	I1123 10:13:53.939342  529379 start.go:83] releasing machines lock for "default-k8s-diff-port-330197", held for 5.387874129s
	I1123 10:13:53.939407  529379 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-330197
	I1123 10:13:53.964886  529379 ssh_runner.go:195] Run: cat /version.json
	I1123 10:13:53.964936  529379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:13:53.965179  529379 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:13:53.965239  529379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:13:54.022663  529379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:13:54.030674  529379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:13:54.242813  529379 ssh_runner.go:195] Run: systemctl --version
	I1123 10:13:54.249718  529379 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:13:54.306155  529379 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:13:54.312738  529379 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:13:54.312865  529379 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:13:54.322834  529379 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:13:54.322921  529379 start.go:496] detecting cgroup driver to use...
	I1123 10:13:54.322984  529379 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:13:54.323072  529379 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:13:54.347842  529379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:13:54.366927  529379 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:13:54.367041  529379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:13:54.388798  529379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:13:54.403907  529379 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:13:54.599819  529379 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:13:54.744797  529379 docker.go:234] disabling docker service ...
	I1123 10:13:54.744874  529379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:13:54.764700  529379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:13:54.779711  529379 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:13:54.923127  529379 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:13:55.048721  529379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:13:55.062138  529379 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:13:55.076317  529379 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:13:55.076401  529379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:13:55.085984  529379 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:13:55.086064  529379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:13:55.095578  529379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:13:55.105020  529379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:13:55.114542  529379 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:13:55.123545  529379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:13:55.132877  529379 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:13:55.141582  529379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:13:55.150595  529379 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:13:55.158405  529379 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:13:55.166273  529379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:13:55.282652  529379 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:13:55.468452  529379 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:13:55.468555  529379 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:13:55.472469  529379 start.go:564] Will wait 60s for crictl version
	I1123 10:13:55.472555  529379 ssh_runner.go:195] Run: which crictl
	I1123 10:13:55.476223  529379 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:13:55.504476  529379 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:13:55.504570  529379 ssh_runner.go:195] Run: crio --version
	I1123 10:13:55.534914  529379 ssh_runner.go:195] Run: crio --version
	I1123 10:13:55.567065  529379 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:13:55.569865  529379 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-330197 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:13:55.586352  529379 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 10:13:55.590406  529379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:13:55.600329  529379 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-330197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-330197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:13:55.600479  529379 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:13:55.600544  529379 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:13:55.633476  529379 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:13:55.633501  529379 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:13:55.633554  529379 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:13:55.662599  529379 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:13:55.662624  529379 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:13:55.662632  529379 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1123 10:13:55.662734  529379 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-330197 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-330197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:13:55.662827  529379 ssh_runner.go:195] Run: crio config
	I1123 10:13:55.735333  529379 cni.go:84] Creating CNI manager for ""
	I1123 10:13:55.735374  529379 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:13:55.735401  529379 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:13:55.735424  529379 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-330197 NodeName:default-k8s-diff-port-330197 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:13:55.735561  529379 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-330197"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:13:55.735643  529379 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:13:55.743777  529379 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:13:55.743872  529379 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:13:55.751662  529379 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1123 10:13:55.765272  529379 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:13:55.777975  529379 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1123 10:13:55.790662  529379 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:13:55.794245  529379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:13:55.803870  529379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:13:55.920758  529379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:13:55.936009  529379 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197 for IP: 192.168.85.2
	I1123 10:13:55.936087  529379 certs.go:195] generating shared ca certs ...
	I1123 10:13:55.936127  529379 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:13:55.936332  529379 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:13:55.936421  529379 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:13:55.936450  529379 certs.go:257] generating profile certs ...
	I1123 10:13:55.936592  529379 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/client.key
	I1123 10:13:55.937627  529379 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/apiserver.key.d6400e66
	I1123 10:13:55.937765  529379 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/proxy-client.key
	I1123 10:13:55.937934  529379 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:13:55.937978  529379 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:13:55.937992  529379 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:13:55.938034  529379 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:13:55.938064  529379 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:13:55.938098  529379 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:13:55.938172  529379 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:13:55.939169  529379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:13:55.962781  529379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:13:55.982315  529379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:13:56.002705  529379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:13:56.028917  529379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 10:13:56.051546  529379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 10:13:56.084436  529379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:13:56.112170  529379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/default-k8s-diff-port-330197/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:13:56.138558  529379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:13:56.162516  529379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:13:56.182651  529379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:13:56.205215  529379 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:13:56.218023  529379 ssh_runner.go:195] Run: openssl version
	I1123 10:13:56.224435  529379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:13:56.233511  529379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:13:56.237245  529379 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:13:56.237311  529379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:13:56.279454  529379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:13:56.287232  529379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:13:56.295268  529379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:13:56.298750  529379 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:13:56.298824  529379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:13:56.339823  529379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 10:13:56.347848  529379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:13:56.356067  529379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:13:56.359811  529379 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:13:56.359883  529379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:13:56.400835  529379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:13:56.408855  529379 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:13:56.412621  529379 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:13:56.454627  529379 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:13:56.495534  529379 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:13:56.536371  529379 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:13:56.584643  529379 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:13:56.643752  529379 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 10:13:56.715002  529379 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-330197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-330197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:13:56.715144  529379 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:13:56.715251  529379 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:13:56.788341  529379 cri.go:89] found id: "42cc19608c6e58ebf338dc82a991b4cd9902c09d76a2fc3ad1709fb98fe71f1c"
	I1123 10:13:56.788415  529379 cri.go:89] found id: "f6adced2438dde36562063e35389aaa6f93406583a489e9200e01abeac6d2ba2"
	I1123 10:13:56.788439  529379 cri.go:89] found id: "49080a105e3a1028d971c78fae51a027ca689e779aae2b400ed02b743c540042"
	I1123 10:13:56.788459  529379 cri.go:89] found id: "fe2851bd5d0e209023685855c54c561683dab32a8f4e2ac4aad2e94044d6da28"
	I1123 10:13:56.788493  529379 cri.go:89] found id: ""
	I1123 10:13:56.788578  529379 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:13:56.810182  529379 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:13:56Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:13:56.810308  529379 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:13:56.827056  529379 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:13:56.827114  529379 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:13:56.827208  529379 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:13:56.835480  529379 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:13:56.835871  529379 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-330197" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:13:56.835975  529379 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-282998/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-330197" cluster setting kubeconfig missing "default-k8s-diff-port-330197" context setting]
	I1123 10:13:56.836262  529379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:13:56.837587  529379 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:13:56.857225  529379 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 10:13:56.857261  529379 kubeadm.go:602] duration metric: took 30.118388ms to restartPrimaryControlPlane
	I1123 10:13:56.857271  529379 kubeadm.go:403] duration metric: took 142.279786ms to StartCluster
	I1123 10:13:56.857286  529379 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:13:56.857349  529379 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:13:56.858106  529379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:13:56.858312  529379 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:13:56.858593  529379 config.go:182] Loaded profile config "default-k8s-diff-port-330197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:13:56.858636  529379 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:13:56.858738  529379 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-330197"
	I1123 10:13:56.858760  529379 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-330197"
	W1123 10:13:56.858769  529379 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:13:56.858794  529379 host.go:66] Checking if "default-k8s-diff-port-330197" exists ...
	I1123 10:13:56.858788  529379 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-330197"
	I1123 10:13:56.858850  529379 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-330197"
	W1123 10:13:56.858869  529379 addons.go:248] addon dashboard should already be in state true
	I1123 10:13:56.858921  529379 host.go:66] Checking if "default-k8s-diff-port-330197" exists ...
	I1123 10:13:56.859335  529379 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:13:56.859464  529379 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:13:56.860097  529379 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-330197"
	I1123 10:13:56.860123  529379 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-330197"
	I1123 10:13:56.860413  529379 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:13:56.863288  529379 out.go:179] * Verifying Kubernetes components...
	I1123 10:13:56.866635  529379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:13:56.912975  529379 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:13:56.916262  529379 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:13:56.916285  529379 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:13:56.916348  529379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:13:56.922244  529379 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-330197"
	W1123 10:13:56.922266  529379 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:13:56.922290  529379 host.go:66] Checking if "default-k8s-diff-port-330197" exists ...
	I1123 10:13:56.922696  529379 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:13:56.931235  529379 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:13:56.934187  529379 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:13:53.009745  529014 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-499584:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.767483973s)
	I1123 10:13:53.009776  529014 kic.go:203] duration metric: took 4.767635229s to extract preloaded images to volume ...
	W1123 10:13:53.010099  529014 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 10:13:53.010296  529014 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:13:53.122657  529014 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-499584 --name newest-cni-499584 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-499584 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-499584 --network newest-cni-499584 --ip 192.168.76.2 --volume newest-cni-499584:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:13:53.499166  529014 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Running}}
	I1123 10:13:53.518973  529014 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:13:53.555987  529014 cli_runner.go:164] Run: docker exec newest-cni-499584 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:13:53.625998  529014 oci.go:144] the created container "newest-cni-499584" has a running status.
	I1123 10:13:53.626029  529014 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa...
	I1123 10:13:53.779462  529014 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:13:53.816209  529014 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:13:53.841764  529014 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:13:53.841785  529014 kic_runner.go:114] Args: [docker exec --privileged newest-cni-499584 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 10:13:53.890845  529014 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:13:53.917217  529014 machine.go:94] provisionDockerMachine start ...
	I1123 10:13:53.917337  529014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:13:53.950244  529014 main.go:143] libmachine: Using SSH client type: native
	I1123 10:13:53.950598  529014 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33501 <nil> <nil>}
	I1123 10:13:53.950621  529014 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:13:53.953645  529014 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 10:13:56.937136  529379 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:13:56.937167  529379 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:13:56.937238  529379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:13:56.979522  529379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:13:56.981827  529379 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:13:56.981842  529379 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:13:56.981901  529379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:13:57.013279  529379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:13:57.041576  529379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:13:57.244340  529379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:13:57.319562  529379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:13:57.328574  529379 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:13:57.328642  529379 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:13:57.403146  529379 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:13:57.403175  529379 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:13:57.468068  529379 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:13:57.468096  529379 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:13:57.508428  529379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:13:57.510948  529379 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:13:57.510974  529379 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:13:57.592936  529379 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:13:57.592963  529379 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:13:57.670768  529379 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:13:57.670785  529379 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:13:57.754208  529379 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:13:57.754235  529379 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:13:57.813988  529379 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:13:57.814017  529379 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:13:57.848844  529379 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:13:57.848872  529379 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:13:57.885531  529379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:13:57.173560  529014 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-499584
	
	I1123 10:13:57.173589  529014 ubuntu.go:182] provisioning hostname "newest-cni-499584"
	I1123 10:13:57.173651  529014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:13:57.207133  529014 main.go:143] libmachine: Using SSH client type: native
	I1123 10:13:57.207453  529014 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33501 <nil> <nil>}
	I1123 10:13:57.207471  529014 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-499584 && echo "newest-cni-499584" | sudo tee /etc/hostname
	I1123 10:13:57.413127  529014 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-499584
	
	I1123 10:13:57.413267  529014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:13:57.449640  529014 main.go:143] libmachine: Using SSH client type: native
	I1123 10:13:57.449953  529014 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33501 <nil> <nil>}
	I1123 10:13:57.449970  529014 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-499584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-499584/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-499584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:13:57.642088  529014 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:13:57.642158  529014 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 10:13:57.642196  529014 ubuntu.go:190] setting up certificates
	I1123 10:13:57.642235  529014 provision.go:84] configureAuth start
	I1123 10:13:57.642338  529014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-499584
	I1123 10:13:57.670028  529014 provision.go:143] copyHostCerts
	I1123 10:13:57.670099  529014 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 10:13:57.670108  529014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 10:13:57.670175  529014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 10:13:57.670298  529014 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 10:13:57.670304  529014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 10:13:57.670333  529014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 10:13:57.670384  529014 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 10:13:57.670389  529014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 10:13:57.670411  529014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 10:13:57.670454  529014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.newest-cni-499584 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-499584]
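The server certificate generated here is written to .minikube/machines/server.pem with the SANs listed in the san=[...] set above. A quick manual spot-check with standard openssl (not something the test itself runs; the path is the one from this log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # Expect DNS:localhost, DNS:minikube, DNS:newest-cni-499584 plus IP:127.0.0.1, IP:192.168.76.2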
	I1123 10:13:57.811051  529014 provision.go:177] copyRemoteCerts
	I1123 10:13:57.811120  529014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:13:57.811166  529014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:13:57.832996  529014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:13:57.955848  529014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 10:13:57.984905  529014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:13:58.026785  529014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:13:58.062624  529014 provision.go:87] duration metric: took 420.349166ms to configureAuth
	I1123 10:13:58.062653  529014 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:13:58.062902  529014 config.go:182] Loaded profile config "newest-cni-499584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:13:58.063038  529014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:13:58.099275  529014 main.go:143] libmachine: Using SSH client type: native
	I1123 10:13:58.099588  529014 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33501 <nil> <nil>}
	I1123 10:13:58.099602  529014 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:13:58.567254  529014 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:13:58.567282  529014 machine.go:97] duration metric: took 4.650038425s to provisionDockerMachine
	I1123 10:13:58.567293  529014 client.go:176] duration metric: took 11.198475003s to LocalClient.Create
	I1123 10:13:58.567306  529014 start.go:167] duration metric: took 11.198535328s to libmachine.API.Create "newest-cni-499584"
	I1123 10:13:58.567313  529014 start.go:293] postStartSetup for "newest-cni-499584" (driver="docker")
	I1123 10:13:58.567347  529014 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:13:58.567427  529014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:13:58.567480  529014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:13:58.588280  529014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:13:58.704677  529014 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:13:58.708647  529014 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:13:58.708683  529014 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:13:58.708696  529014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 10:13:58.708753  529014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 10:13:58.708846  529014 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 10:13:58.708952  529014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:13:58.727495  529014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:13:58.761823  529014 start.go:296] duration metric: took 194.48563ms for postStartSetup
	I1123 10:13:58.762316  529014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-499584
	I1123 10:13:58.791934  529014 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/config.json ...
	I1123 10:13:58.792207  529014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:13:58.792256  529014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:13:58.837574  529014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:13:58.960176  529014 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:13:58.965772  529014 start.go:128] duration metric: took 11.600641526s to createHost
	I1123 10:13:58.965797  529014 start.go:83] releasing machines lock for "newest-cni-499584", held for 11.600769816s
	I1123 10:13:58.965866  529014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-499584
	I1123 10:13:58.989907  529014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:13:58.989985  529014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:13:58.990199  529014 ssh_runner.go:195] Run: cat /version.json
	I1123 10:13:58.990253  529014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:13:59.025551  529014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:13:59.027770  529014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:13:59.161184  529014 ssh_runner.go:195] Run: systemctl --version
	I1123 10:13:59.280116  529014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:13:59.355637  529014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:13:59.369929  529014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:13:59.370058  529014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:13:59.410216  529014 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 10:13:59.410295  529014 start.go:496] detecting cgroup driver to use...
	I1123 10:13:59.410343  529014 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:13:59.410422  529014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:13:59.444373  529014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:13:59.463030  529014 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:13:59.463142  529014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:13:59.496834  529014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:13:59.526051  529014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:13:59.712989  529014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:13:59.949029  529014 docker.go:234] disabling docker service ...
	I1123 10:13:59.949096  529014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:13:59.993697  529014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:14:00.022165  529014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:14:00.246241  529014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:14:00.471105  529014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:14:00.494043  529014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:14:00.520794  529014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:14:00.520887  529014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:00.547024  529014 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:14:00.547114  529014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:00.576565  529014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:00.596547  529014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:00.613274  529014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:14:00.627345  529014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:00.640100  529014 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:00.659998  529014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:00.676093  529014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:14:00.689262  529014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:14:00.699930  529014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:14:00.889224  529014 ssh_runner.go:195] Run: sudo systemctl restart crio
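Taken together, the sed commands above leave /etc/crio/crio.conf.d/02-crio.conf carrying the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl used by this run. A manual spot-check on the node (expected values inferred from the commands in this log, not captured from the file):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",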
	I1123 10:14:01.125316  529014 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:14:01.125424  529014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:14:01.131439  529014 start.go:564] Will wait 60s for crictl version
	I1123 10:14:01.131521  529014 ssh_runner.go:195] Run: which crictl
	I1123 10:14:01.136788  529014 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:14:01.193449  529014 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:14:01.193582  529014 ssh_runner.go:195] Run: crio --version
	I1123 10:14:01.253589  529014 ssh_runner.go:195] Run: crio --version
	I1123 10:14:01.303942  529014 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:14:01.306876  529014 cli_runner.go:164] Run: docker network inspect newest-cni-499584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:14:01.338217  529014 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:14:01.342307  529014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:14:01.363621  529014 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 10:14:01.366526  529014 kubeadm.go:884] updating cluster {Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:14:01.366674  529014 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:14:01.366750  529014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:14:01.454014  529014 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:14:01.454040  529014 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:14:01.454098  529014 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:14:01.523495  529014 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:14:01.523534  529014 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:14:01.523542  529014 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 10:14:01.523636  529014 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-499584 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:14:01.523725  529014 ssh_runner.go:195] Run: crio config
	I1123 10:14:01.636679  529014 cni.go:84] Creating CNI manager for ""
	I1123 10:14:01.636703  529014 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:14:01.636724  529014 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 10:14:01.636759  529014 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-499584 NodeName:newest-cni-499584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:14:01.636908  529014 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-499584"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:14:01.636994  529014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:14:01.647683  529014 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:14:01.647771  529014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:14:01.664131  529014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:14:01.681594  529014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:14:01.696908  529014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
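The 2212-byte file written here is the kubeadm configuration rendered above. If a config like this needs checking by hand, recent kubeadm releases can validate it directly; a sketch using the binary path and file name from this log (the validate subcommand is assumed to be available in the v1.34 tooling):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new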
	I1123 10:14:01.712010  529014 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:14:01.716140  529014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:14:01.726419  529014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:14:01.879447  529014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:14:01.901938  529014 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584 for IP: 192.168.76.2
	I1123 10:14:01.901973  529014 certs.go:195] generating shared ca certs ...
	I1123 10:14:01.901991  529014 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:01.902164  529014 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:14:01.902226  529014 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:14:01.902245  529014 certs.go:257] generating profile certs ...
	I1123 10:14:01.902311  529014 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/client.key
	I1123 10:14:01.902328  529014 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/client.crt with IP's: []
	I1123 10:14:02.261126  529014 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/client.crt ...
	I1123 10:14:02.261152  529014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/client.crt: {Name:mka60ca0c3dbd095d71cfc3f5c4e4a73acb79004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:02.261330  529014 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/client.key ...
	I1123 10:14:02.261339  529014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/client.key: {Name:mke3178397ee3ee10121fcb4625290aa9b14fde7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:02.261517  529014 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.key.22d7de13
	I1123 10:14:02.261536  529014 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.crt.22d7de13 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 10:14:02.305952  529014 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.crt.22d7de13 ...
	I1123 10:14:02.306041  529014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.crt.22d7de13: {Name:mk84bb366485978143150a0ad43da06982f24646 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:02.306255  529014 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.key.22d7de13 ...
	I1123 10:14:02.306297  529014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.key.22d7de13: {Name:mkfccb7b89848a6decf5cc785ca16d5921ab8136 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:02.306432  529014 certs.go:382] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.crt.22d7de13 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.crt
	I1123 10:14:02.306567  529014 certs.go:386] copying /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.key.22d7de13 -> /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.key
	I1123 10:14:02.306656  529014 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.key
	I1123 10:14:02.306709  529014 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.crt with IP's: []
	I1123 10:14:02.722562  529014 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.crt ...
	I1123 10:14:02.722641  529014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.crt: {Name:mk0be53aadfed03915aacc12739c64ed51957d97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:02.722892  529014 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.key ...
	I1123 10:14:02.722930  529014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.key: {Name:mk70a732a661f5e5becca4a7083a003e03f854f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:02.723192  529014 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:14:02.723263  529014 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:14:02.723305  529014 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:14:02.723357  529014 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:14:02.723419  529014 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:14:02.723470  529014 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:14:02.723564  529014 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:14:02.724185  529014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:14:02.759699  529014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:14:02.797276  529014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:14:02.843711  529014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:14:02.895790  529014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:14:02.971259  529014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:14:03.011830  529014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:14:03.042854  529014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:14:03.079587  529014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:14:03.114168  529014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:14:03.158951  529014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:14:03.191014  529014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:14:03.220882  529014 ssh_runner.go:195] Run: openssl version
	I1123 10:14:03.230533  529014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:14:03.242454  529014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:14:03.246554  529014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:14:03.246662  529014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:14:03.309392  529014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:14:03.318890  529014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:14:03.332851  529014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:14:03.337947  529014 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:14:03.338051  529014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:14:03.413307  529014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 10:14:03.426716  529014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:14:03.438936  529014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:14:03.448503  529014 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:14:03.448596  529014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:14:03.508250  529014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
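The openssl x509 -hash calls above compute the subject-hash names OpenSSL uses to look certificates up under /etc/ssl/certs, and the ln -fs commands create the matching <hash>.0 links (b5213941.0, 51391683.0, 3ec20f2e.0). Reproducing the mapping for the cluster CA by hand on the node:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    readlink /etc/ssl/certs/b5213941.0                                        # /etc/ssl/certs/minikubeCA.pem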
	I1123 10:14:03.526595  529014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:14:03.533503  529014 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:14:03.533588  529014 kubeadm.go:401] StartCluster: {Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:14:03.533724  529014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:14:03.533848  529014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:14:03.596558  529014 cri.go:89] found id: ""
	I1123 10:14:03.596650  529014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:14:03.611155  529014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:14:03.622515  529014 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:14:03.622644  529014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:14:03.637645  529014 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:14:03.637669  529014 kubeadm.go:158] found existing configuration files:
	
	I1123 10:14:03.637761  529014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:14:03.650917  529014 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:14:03.651021  529014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:14:03.667257  529014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:14:03.682307  529014 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:14:03.682398  529014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:14:03.690962  529014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:14:03.703092  529014 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:14:03.703181  529014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:14:03.714798  529014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:14:03.727146  529014 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:14:03.727237  529014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:14:03.735713  529014 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:14:03.807322  529014 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:14:03.807477  529014 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:14:03.856946  529014 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:14:03.857074  529014 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 10:14:03.857139  529014 kubeadm.go:319] OS: Linux
	I1123 10:14:03.857200  529014 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:14:03.857276  529014 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 10:14:03.857347  529014 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:14:03.857423  529014 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:14:03.857508  529014 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:14:03.857637  529014 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:14:03.857723  529014 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:14:03.857819  529014 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:14:03.857913  529014 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 10:14:03.986923  529014 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:14:03.987115  529014 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:14:03.987251  529014 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:14:04.017796  529014 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:14:04.023530  529014 out.go:252]   - Generating certificates and keys ...
	I1123 10:14:04.023695  529014 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:14:04.024020  529014 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:14:04.805904  529014 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:14:05.297929  529014 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:14:05.629623  529014 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:14:06.398202  529014 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:14:06.642176  529014 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:14:06.642762  529014 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-499584] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 10:14:07.259027  529379 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.014655749s)
	I1123 10:14:07.259076  529379 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.939494648s)
	I1123 10:14:07.259097  529379 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-330197" to be "Ready" ...
	I1123 10:14:07.259402  529379 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.750946269s)
	I1123 10:14:07.259631  529379 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.374067047s)
	I1123 10:14:07.262679  529379 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-330197 addons enable metrics-server
	
	I1123 10:14:07.297387  529379 node_ready.go:49] node "default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:07.297441  529379 node_ready.go:38] duration metric: took 38.32504ms for node "default-k8s-diff-port-330197" to be "Ready" ...
	I1123 10:14:07.297456  529379 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:14:07.297526  529379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:14:07.310373  529379 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 10:14:07.313592  529379 addons.go:530] duration metric: took 10.454943341s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 10:14:07.330177  529379 api_server.go:72] duration metric: took 10.471828417s to wait for apiserver process to appear ...
	I1123 10:14:07.330205  529379 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:14:07.330234  529379 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 10:14:07.340384  529379 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
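The healthz probe above can be repeated by hand from anywhere that reaches the node network (for example from inside minikube -p default-k8s-diff-port-330197 ssh); -k only skips CA verification, which is acceptable for a quick manual check since /healthz is readable without credentials under the default RBAC bindings:

    curl -k https://192.168.85.2:8444/healthz
    # ok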
	I1123 10:14:07.341934  529379 api_server.go:141] control plane version: v1.34.1
	I1123 10:14:07.341976  529379 api_server.go:131] duration metric: took 11.762583ms to wait for apiserver health ...
	I1123 10:14:07.341986  529379 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:14:07.346445  529379 system_pods.go:59] 8 kube-system pods found
	I1123 10:14:07.346492  529379 system_pods.go:61] "coredns-66bc5c9577-pphv6" [0a9030ea-483e-46e0-8d24-2b0dd1fe99ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:14:07.346503  529379 system_pods.go:61] "etcd-default-k8s-diff-port-330197" [04e76740-6a3c-4f4e-9b5d-2c8999bef68a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:14:07.346511  529379 system_pods.go:61] "kindnet-wfv8n" [aa574e11-da93-494e-8803-f1af18bb542d] Running
	I1123 10:14:07.346518  529379 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-330197" [abfd2542-91c6-409a-b0bf-6b1cf4f427e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:14:07.346530  529379 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-330197" [c343422c-7b25-41eb-aca3-ae06812b0f50] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:14:07.346535  529379 system_pods.go:61] "kube-proxy-75qqt" [e9999f1a-4069-470f-9b88-f9bff97ff125] Running
	I1123 10:14:07.346542  529379 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-330197" [010409e0-e0ee-4de9-a9e6-23ea4a90a923] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:14:07.346556  529379 system_pods.go:61] "storage-provisioner" [41502cc7-b934-4a0a-911f-9fb784b38dc3] Running
	I1123 10:14:07.346562  529379 system_pods.go:74] duration metric: took 4.570756ms to wait for pod list to return data ...
	I1123 10:14:07.346575  529379 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:14:07.349531  529379 default_sa.go:45] found service account: "default"
	I1123 10:14:07.349565  529379 default_sa.go:55] duration metric: took 2.983522ms for default service account to be created ...
	I1123 10:14:07.349585  529379 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:14:07.354891  529379 system_pods.go:86] 8 kube-system pods found
	I1123 10:14:07.354930  529379 system_pods.go:89] "coredns-66bc5c9577-pphv6" [0a9030ea-483e-46e0-8d24-2b0dd1fe99ff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:14:07.354948  529379 system_pods.go:89] "etcd-default-k8s-diff-port-330197" [04e76740-6a3c-4f4e-9b5d-2c8999bef68a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:14:07.354958  529379 system_pods.go:89] "kindnet-wfv8n" [aa574e11-da93-494e-8803-f1af18bb542d] Running
	I1123 10:14:07.354965  529379 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-330197" [abfd2542-91c6-409a-b0bf-6b1cf4f427e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:14:07.354978  529379 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-330197" [c343422c-7b25-41eb-aca3-ae06812b0f50] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:14:07.354984  529379 system_pods.go:89] "kube-proxy-75qqt" [e9999f1a-4069-470f-9b88-f9bff97ff125] Running
	I1123 10:14:07.354996  529379 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-330197" [010409e0-e0ee-4de9-a9e6-23ea4a90a923] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:14:07.355000  529379 system_pods.go:89] "storage-provisioner" [41502cc7-b934-4a0a-911f-9fb784b38dc3] Running
	I1123 10:14:07.355007  529379 system_pods.go:126] duration metric: took 5.416152ms to wait for k8s-apps to be running ...
	I1123 10:14:07.355027  529379 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:14:07.355086  529379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:14:07.377691  529379 system_svc.go:56] duration metric: took 22.653719ms WaitForService to wait for kubelet
	I1123 10:14:07.377733  529379 kubeadm.go:587] duration metric: took 10.519390102s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:14:07.377750  529379 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:14:07.381021  529379 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:14:07.381055  529379 node_conditions.go:123] node cpu capacity is 2
	I1123 10:14:07.381078  529379 node_conditions.go:105] duration metric: took 3.313981ms to run NodePressure ...
	I1123 10:14:07.381096  529379 start.go:242] waiting for startup goroutines ...
	I1123 10:14:07.381104  529379 start.go:247] waiting for cluster config update ...
	I1123 10:14:07.381119  529379 start.go:256] writing updated cluster config ...
	I1123 10:14:07.381475  529379 ssh_runner.go:195] Run: rm -f paused
	I1123 10:14:07.386141  529379 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:14:07.390308  529379 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pphv6" in "kube-system" namespace to be "Ready" or be gone ...
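The pod_ready loop started here polls kube-system pods matching the label selectors listed above. A rough hand-rolled equivalent with kubectl, shown for the coredns selector only (the context name assumes minikube's usual profile-named kubeconfig context; the selector and 4m budget come from this log):

    kubectl --context default-k8s-diff-port-330197 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m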
	I1123 10:14:07.457393  529014 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:14:07.458033  529014 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-499584] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 10:14:07.812528  529014 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:14:07.893921  529014 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:14:08.353509  529014 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:14:08.354117  529014 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:14:09.173039  529014 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:14:09.591855  529014 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:14:10.143721  529014 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:14:10.913759  529014 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:14:11.137787  529014 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:14:11.137985  529014 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:14:11.139301  529014 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:14:11.142744  529014 out.go:252]   - Booting up control plane ...
	I1123 10:14:11.142917  529014 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:14:11.143049  529014 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:14:11.149656  529014 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:14:11.185440  529014 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:14:11.185550  529014 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:14:11.194711  529014 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:14:11.194818  529014 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:14:11.194858  529014 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:14:11.358472  529014 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:14:11.358593  529014 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1123 10:14:09.396925  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	W1123 10:14:11.897154  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	I1123 10:14:12.861781  529014 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500990171s
	I1123 10:14:12.863302  529014 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:14:12.863655  529014 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 10:14:12.864366  529014 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:14:12.865850  529014 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1123 10:14:13.901396  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	W1123 10:14:16.395760  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	I1123 10:14:18.445307  529014 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.579058326s
	W1123 10:14:18.897928  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	W1123 10:14:21.396837  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	I1123 10:14:22.875401  529014 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.011256302s
	I1123 10:14:23.352228  529014 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.485669548s
	I1123 10:14:23.379432  529014 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:14:23.396861  529014 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:14:23.420892  529014 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:14:23.421119  529014 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-499584 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:14:23.436598  529014 kubeadm.go:319] [bootstrap-token] Using token: xxq0u3.60o9tzhfsvwwrm1o
	I1123 10:14:23.439632  529014 out.go:252]   - Configuring RBAC rules ...
	I1123 10:14:23.439772  529014 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:14:23.447191  529014 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:14:23.456942  529014 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:14:23.461964  529014 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:14:23.466537  529014 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:14:23.477944  529014 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:14:23.760374  529014 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:14:24.195610  529014 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:14:24.759935  529014 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:14:24.761053  529014 kubeadm.go:319] 
	I1123 10:14:24.761135  529014 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:14:24.761146  529014 kubeadm.go:319] 
	I1123 10:14:24.761244  529014 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:14:24.761286  529014 kubeadm.go:319] 
	I1123 10:14:24.761357  529014 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:14:24.761482  529014 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:14:24.761583  529014 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:14:24.761595  529014 kubeadm.go:319] 
	I1123 10:14:24.761650  529014 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:14:24.761660  529014 kubeadm.go:319] 
	I1123 10:14:24.761708  529014 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:14:24.761716  529014 kubeadm.go:319] 
	I1123 10:14:24.761768  529014 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:14:24.761848  529014 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:14:24.761920  529014 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:14:24.761927  529014 kubeadm.go:319] 
	I1123 10:14:24.762029  529014 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:14:24.762110  529014 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:14:24.762139  529014 kubeadm.go:319] 
	I1123 10:14:24.762235  529014 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xxq0u3.60o9tzhfsvwwrm1o \
	I1123 10:14:24.762345  529014 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:887f8119ffe4d5a917d34cb24e0eb6ba3996e6bcce8cd575315ae96526a54a7e \
	I1123 10:14:24.762376  529014 kubeadm.go:319] 	--control-plane 
	I1123 10:14:24.762384  529014 kubeadm.go:319] 
	I1123 10:14:24.762475  529014 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:14:24.762483  529014 kubeadm.go:319] 
	I1123 10:14:24.762578  529014 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xxq0u3.60o9tzhfsvwwrm1o \
	I1123 10:14:24.762688  529014 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:887f8119ffe4d5a917d34cb24e0eb6ba3996e6bcce8cd575315ae96526a54a7e 
	I1123 10:14:24.767255  529014 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 10:14:24.767493  529014 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 10:14:24.767606  529014 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:14:24.767627  529014 cni.go:84] Creating CNI manager for ""
	I1123 10:14:24.767635  529014 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:14:24.770849  529014 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 10:14:24.773811  529014 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:14:24.777834  529014 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:14:24.777894  529014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:14:24.791106  529014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:14:25.100606  529014 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:14:25.100736  529014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:14:25.100810  529014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-499584 minikube.k8s.io/updated_at=2025_11_23T10_14_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=newest-cni-499584 minikube.k8s.io/primary=true
	I1123 10:14:25.293023  529014 ops.go:34] apiserver oom_adj: -16
	I1123 10:14:25.293232  529014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:14:25.794144  529014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:14:26.293788  529014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:14:26.794169  529014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1123 10:14:23.397020  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	W1123 10:14:25.896744  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	I1123 10:14:27.294081  529014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:14:27.793306  529014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:14:28.293349  529014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:14:28.793337  529014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:14:28.956782  529014 kubeadm.go:1114] duration metric: took 3.856090759s to wait for elevateKubeSystemPrivileges
	I1123 10:14:28.956812  529014 kubeadm.go:403] duration metric: took 25.423231122s to StartCluster
	I1123 10:14:28.956829  529014 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:28.956888  529014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:14:28.958017  529014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:28.958261  529014 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:14:28.958359  529014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:14:28.958623  529014 config.go:182] Loaded profile config "newest-cni-499584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:14:28.958665  529014 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:14:28.958730  529014 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-499584"
	I1123 10:14:28.958748  529014 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-499584"
	I1123 10:14:28.958769  529014 host.go:66] Checking if "newest-cni-499584" exists ...
	I1123 10:14:28.959344  529014 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:28.960019  529014 addons.go:70] Setting default-storageclass=true in profile "newest-cni-499584"
	I1123 10:14:28.960046  529014 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-499584"
	I1123 10:14:28.960363  529014 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:28.962981  529014 out.go:179] * Verifying Kubernetes components...
	I1123 10:14:28.967988  529014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:14:28.990354  529014 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:14:28.994362  529014 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:14:28.994392  529014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:14:28.994479  529014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:29.011626  529014 addons.go:239] Setting addon default-storageclass=true in "newest-cni-499584"
	I1123 10:14:29.011671  529014 host.go:66] Checking if "newest-cni-499584" exists ...
	I1123 10:14:29.012144  529014 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:29.037115  529014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:29.052831  529014 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:14:29.052852  529014 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:14:29.052914  529014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:29.080190  529014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:29.330559  529014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:14:29.330752  529014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:14:29.434027  529014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:14:29.463436  529014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:14:29.750105  529014 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 10:14:29.751051  529014 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:14:29.752083  529014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:14:30.246527  529014 api_server.go:72] duration metric: took 1.288238755s to wait for apiserver process to appear ...
	I1123 10:14:30.246554  529014 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:14:30.246570  529014 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:14:30.264082  529014 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:14:30.266558  529014 api_server.go:141] control plane version: v1.34.1
	I1123 10:14:30.266598  529014 api_server.go:131] duration metric: took 20.037142ms to wait for apiserver health ...
	I1123 10:14:30.266608  529014 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:14:30.271378  529014 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 10:14:30.275213  529014 addons.go:530] duration metric: took 1.316538886s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 10:14:30.276741  529014 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-499584" context rescaled to 1 replicas
	I1123 10:14:30.279900  529014 system_pods.go:59] 8 kube-system pods found
	I1123 10:14:30.279939  529014 system_pods.go:61] "coredns-66bc5c9577-gpv4n" [3ac78ff6-250d-4ce6-ba6f-913ba5a46be8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:14:30.279949  529014 system_pods.go:61] "etcd-newest-cni-499584" [fbc5fde9-9d75-41ee-a27e-bea9e43c5c1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:14:30.279955  529014 system_pods.go:61] "kindnet-8pwmm" [3933503c-90da-4b79-98e7-e4a22d58813d] Running
	I1123 10:14:30.279960  529014 system_pods.go:61] "kube-apiserver-newest-cni-499584" [2a4c121c-305b-4eef-8b3a-127a1fef8812] Running
	I1123 10:14:30.279966  529014 system_pods.go:61] "kube-controller-manager-newest-cni-499584" [c00e062c-870f-4ed7-a05d-615fc6c7d81d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:14:30.279971  529014 system_pods.go:61] "kube-proxy-7ccmv" [8dace15f-cf56-4d36-9840-ceb07d85b8b0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 10:14:30.279976  529014 system_pods.go:61] "kube-scheduler-newest-cni-499584" [94684fe3-8d3e-4f48-9dad-6f0c6414f3c2] Running
	I1123 10:14:30.279981  529014 system_pods.go:61] "storage-provisioner" [70f72df9-2a87-468c-9f4c-2df81d587a29] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:14:30.279988  529014 system_pods.go:74] duration metric: took 13.373718ms to wait for pod list to return data ...
	I1123 10:14:30.280003  529014 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:14:30.282723  529014 default_sa.go:45] found service account: "default"
	I1123 10:14:30.282751  529014 default_sa.go:55] duration metric: took 2.741631ms for default service account to be created ...
	I1123 10:14:30.282765  529014 kubeadm.go:587] duration metric: took 1.324480239s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:14:30.282782  529014 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:14:30.287857  529014 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:14:30.287935  529014 node_conditions.go:123] node cpu capacity is 2
	I1123 10:14:30.287963  529014 node_conditions.go:105] duration metric: took 5.175803ms to run NodePressure ...
	I1123 10:14:30.288012  529014 start.go:242] waiting for startup goroutines ...
	I1123 10:14:30.288037  529014 start.go:247] waiting for cluster config update ...
	I1123 10:14:30.288060  529014 start.go:256] writing updated cluster config ...
	I1123 10:14:30.288405  529014 ssh_runner.go:195] Run: rm -f paused
	I1123 10:14:30.372024  529014 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:14:30.375330  529014 out.go:179] * Done! kubectl is now configured to use "newest-cni-499584" cluster and "default" namespace by default
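	A minimal sketch of how this freshly initialized cluster could be sanity-checked from the host, assuming the kubeconfig context name matches the profile name reported in the final line above (this is an editorial sketch, not part of the captured output):
	  kubectl --context newest-cni-499584 get nodes -o wide
	  kubectl --context newest-cni-499584 -n kube-system get pods -o wide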
	
	
	==> CRI-O <==
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.061706041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.086858349Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5c01ff31-0bba-4a87-8bd2-3b01a1ab0183 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.097800192Z" level=info msg="Ran pod sandbox 60846d3e8e9928739ab167b2234f6f85c7454b1f089a543b85d13f4dbe9ee9c4 with infra container: kube-system/kindnet-8pwmm/POD" id=5c01ff31-0bba-4a87-8bd2-3b01a1ab0183 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.103919616Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b4b36925-d506-4447-8d31-56549e521abc name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.11903508Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=04d42dd3-e6fa-4ae6-a477-7267164ca7dd name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.129901524Z" level=info msg="Creating container: kube-system/kindnet-8pwmm/kindnet-cni" id=6eb71d53-7495-4bbd-8b30-ba9b6235124c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.130032726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.14395229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.147947915Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.211011764Z" level=info msg="Created container a3b3132ee431c5129fc1058ac530c9d04ac933580c4b9146621e222c83e90c28: kube-system/kindnet-8pwmm/kindnet-cni" id=6eb71d53-7495-4bbd-8b30-ba9b6235124c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.214451851Z" level=info msg="Starting container: a3b3132ee431c5129fc1058ac530c9d04ac933580c4b9146621e222c83e90c28" id=5e70b068-5a2d-4ec3-9af8-9b1710dec15b name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.217973302Z" level=info msg="Started container" PID=1430 containerID=a3b3132ee431c5129fc1058ac530c9d04ac933580c4b9146621e222c83e90c28 description=kube-system/kindnet-8pwmm/kindnet-cni id=5e70b068-5a2d-4ec3-9af8-9b1710dec15b name=/runtime.v1.RuntimeService/StartContainer sandboxID=60846d3e8e9928739ab167b2234f6f85c7454b1f089a543b85d13f4dbe9ee9c4
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.368776139Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-7ccmv/POD" id=1ae52f53-fbdf-4352-8e1b-9448fc33c35f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.368847394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.376201619Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1ae52f53-fbdf-4352-8e1b-9448fc33c35f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.380287607Z" level=info msg="Ran pod sandbox 1db5b394d14009a27b76ea9ba0f42f809d916638b9da3161726c6f3e0cb0bcf2 with infra container: kube-system/kube-proxy-7ccmv/POD" id=1ae52f53-fbdf-4352-8e1b-9448fc33c35f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.382781038Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=965c7782-9c9a-41f7-bde7-68ed975a391c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.383871229Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=0802a0d0-0f1d-4f46-9423-0166190d13e6 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.389549809Z" level=info msg="Creating container: kube-system/kube-proxy-7ccmv/kube-proxy" id=c86f3e85-f0f8-4927-a8b1-e3af22a0d660 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.389916683Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.410355127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.410864485Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.443911195Z" level=info msg="Created container 84ea5ce906a245e3f857cc937b4eb161dc3fb9d366f9b7f97180b680600d3fda: kube-system/kube-proxy-7ccmv/kube-proxy" id=c86f3e85-f0f8-4927-a8b1-e3af22a0d660 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.44775969Z" level=info msg="Starting container: 84ea5ce906a245e3f857cc937b4eb161dc3fb9d366f9b7f97180b680600d3fda" id=f39116fe-cf99-47ea-9405-65e7280d920d name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:14:29 newest-cni-499584 crio[842]: time="2025-11-23T10:14:29.453595336Z" level=info msg="Started container" PID=1490 containerID=84ea5ce906a245e3f857cc937b4eb161dc3fb9d366f9b7f97180b680600d3fda description=kube-system/kube-proxy-7ccmv/kube-proxy id=f39116fe-cf99-47ea-9405-65e7280d920d name=/runtime.v1.RuntimeService/StartContainer sandboxID=1db5b394d14009a27b76ea9ba0f42f809d916638b9da3161726c6f3e0cb0bcf2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	84ea5ce906a24       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   1db5b394d1400       kube-proxy-7ccmv                            kube-system
	a3b3132ee431c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   60846d3e8e992       kindnet-8pwmm                               kube-system
	c9c323c8a602d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   18 seconds ago      Running             kube-scheduler            0                   768c834ad01a7       kube-scheduler-newest-cni-499584            kube-system
	09277f08dc6db       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   18 seconds ago      Running             kube-apiserver            0                   2e52c73ad824c       kube-apiserver-newest-cni-499584            kube-system
	5ec09b6fc1519       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   18 seconds ago      Running             etcd                      0                   b56bfdb49aaad       etcd-newest-cni-499584                      kube-system
	4ecad63e6d4cd       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   18 seconds ago      Running             kube-controller-manager   0                   347a3997c8f0c       kube-controller-manager-newest-cni-499584   kube-system
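	The table above is the container runtime's view of the pods on the node; on a CRI-O machine such as this one, roughly the same listing can usually be reproduced with crictl, for example (a sketch, not necessarily the exact invocation the report tooling used):
	  sudo crictl ps -a
	  sudo crictl pods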
	
	
	==> describe nodes <==
	Name:               newest-cni-499584
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-499584
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=newest-cni-499584
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_14_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:14:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-499584
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:14:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:14:24 +0000   Sun, 23 Nov 2025 10:14:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:14:24 +0000   Sun, 23 Nov 2025 10:14:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:14:24 +0000   Sun, 23 Nov 2025 10:14:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 10:14:24 +0000   Sun, 23 Nov 2025 10:14:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-499584
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                d0df54c2-215f-48c8-868a-6c3e0d8ae69f
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-499584                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7s
	  kube-system                 kindnet-8pwmm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-499584             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-499584    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-7ccmv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-499584             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Normal   Starting                 19s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 19s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node newest-cni-499584 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node newest-cni-499584 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19s (x8 over 19s)  kubelet          Node newest-cni-499584 status is now: NodeHasSufficientPID
	  Normal   Starting                 7s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 7s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7s                 kubelet          Node newest-cni-499584 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7s                 kubelet          Node newest-cni-499584 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7s                 kubelet          Node newest-cni-499584 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-499584 event: Registered Node newest-cni-499584 in Controller
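	The Ready=False condition and the not-ready taint above reflect the moment of capture, just after kindnet was scheduled but before it had written a CNI config; a sketch of how to regenerate this view against the same cluster, assuming the context name from the start log:
	  kubectl --context newest-cni-499584 describe node newest-cni-499584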
	
	
	==> dmesg <==
	[Nov23 09:50] overlayfs: idmapped layers are currently not supported
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	[Nov23 10:10] overlayfs: idmapped layers are currently not supported
	[Nov23 10:11] overlayfs: idmapped layers are currently not supported
	[Nov23 10:12] overlayfs: idmapped layers are currently not supported
	[ +16.378417] overlayfs: idmapped layers are currently not supported
	[Nov23 10:13] overlayfs: idmapped layers are currently not supported
	[Nov23 10:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5ec09b6fc1519ca8664db432ee827d009063bcbb6a3ec6eead22a78c22fbbaf2] <==
	{"level":"warn","ts":"2025-11-23T10:14:17.350364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.396036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.461792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.529121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.579893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.634634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.659102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.694134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.712783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.743883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.821516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.822703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.834706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.868324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.892690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.915968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.956960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:17.998591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:18.032661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:18.062873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:18.114213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:18.162570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:18.188659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:18.243723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:18.474493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55626","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:14:31 up  2:57,  0 user,  load average: 6.14, 4.94, 3.80
	Linux newest-cni-499584 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a3b3132ee431c5129fc1058ac530c9d04ac933580c4b9146621e222c83e90c28] <==
	I1123 10:14:29.276754       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:14:29.277012       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:14:29.277135       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:14:29.277147       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:14:29.277157       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:14:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:14:29.483435       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:14:29.483453       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:14:29.483460       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:14:29.483692       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [09277f08dc6dbcbbc9503e6c20ab5fa06ad10b58e8f0748a19bd0df7596e6e57] <==
	I1123 10:14:19.970163       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 10:14:20.010668       1 controller.go:667] quota admission added evaluator for: namespaces
	E1123 10:14:20.032283       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1123 10:14:20.069359       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:14:20.072500       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 10:14:20.103448       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:14:20.103522       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:14:20.152263       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:14:20.690770       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:14:20.703319       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:14:20.703357       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:14:22.279446       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:14:22.360412       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:14:22.508759       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:14:22.529499       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 10:14:22.535823       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:14:22.545966       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:14:22.765672       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:14:24.170949       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:14:24.194347       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:14:24.210089       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 10:14:27.877114       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:14:27.888267       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:14:28.511688       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:14:28.711978       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [4ecad63e6d4cd4f56796ef18276001fa8bd4720642296141de776e95eb766836] <==
	I1123 10:14:27.787140       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:14:27.795162       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-499584" podCIDRs=["10.42.0.0/24"]
	I1123 10:14:27.799978       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 10:14:27.800099       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:14:27.806094       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 10:14:27.806202       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 10:14:27.806372       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:14:27.806501       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-499584"
	I1123 10:14:27.806597       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 10:14:27.806906       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:14:27.806996       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 10:14:27.807282       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 10:14:27.807361       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:14:27.807176       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:14:27.807199       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:14:27.809085       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 10:14:27.809237       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:14:27.810611       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 10:14:27.810935       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 10:14:27.813028       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 10:14:27.813111       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 10:14:27.814389       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 10:14:27.820286       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:14:27.826376       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 10:14:27.830846       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	
	
	==> kube-proxy [84ea5ce906a245e3f857cc937b4eb161dc3fb9d366f9b7f97180b680600d3fda] <==
	I1123 10:14:29.577480       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:14:29.655027       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:14:29.756722       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:14:29.756760       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 10:14:29.756845       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:14:29.887494       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:14:29.887615       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:14:29.901640       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:14:29.902020       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:14:29.902077       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:14:29.903420       1 config.go:200] "Starting service config controller"
	I1123 10:14:29.903618       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:14:29.903674       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:14:29.903704       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:14:29.903740       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:14:29.903767       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:14:29.904438       1 config.go:309] "Starting node config controller"
	I1123 10:14:29.907013       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:14:29.907084       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:14:30.011164       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:14:30.011208       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:14:30.011257       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
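	Each bracketed hash in these section headers is a container ID from the container-status table earlier in the report; the same per-container output can usually be pulled directly on the node with crictl, e.g. (a sketch using the kube-proxy ID above):
	  sudo crictl logs 84ea5ce906a245e3f857cc937b4eb161dc3fb9d366f9b7f97180b680600d3fda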
	
	
	==> kube-scheduler [c9c323c8a602dcc7383553e73338e41e7b4719c85c2f8f2fd800074659b26a95] <==
	I1123 10:14:17.420340       1 serving.go:386] Generated self-signed cert in-memory
	I1123 10:14:23.323357       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:14:23.323405       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:14:23.330177       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 10:14:23.330228       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 10:14:23.330265       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:14:23.330272       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:14:23.330285       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:14:23.330299       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:14:23.333928       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:14:23.334069       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:14:23.431202       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 10:14:23.431277       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 10:14:23.431363       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:14:24 newest-cni-499584 kubelet[1303]: I1123 10:14:24.531556    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3979a1427ce41233d285307602e3a35-kubeconfig\") pod \"kube-controller-manager-newest-cni-499584\" (UID: \"d3979a1427ce41233d285307602e3a35\") " pod="kube-system/kube-controller-manager-newest-cni-499584"
	Nov 23 10:14:24 newest-cni-499584 kubelet[1303]: I1123 10:14:24.531578    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c3b651cdceb2a0cf95cee7db7f79e0a6-kubeconfig\") pod \"kube-scheduler-newest-cni-499584\" (UID: \"c3b651cdceb2a0cf95cee7db7f79e0a6\") " pod="kube-system/kube-scheduler-newest-cni-499584"
	Nov 23 10:14:24 newest-cni-499584 kubelet[1303]: I1123 10:14:24.531593    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a21fa638f0221efa709009d05cbd760f-ca-certs\") pod \"kube-apiserver-newest-cni-499584\" (UID: \"a21fa638f0221efa709009d05cbd760f\") " pod="kube-system/kube-apiserver-newest-cni-499584"
	Nov 23 10:14:24 newest-cni-499584 kubelet[1303]: I1123 10:14:24.531610    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d3979a1427ce41233d285307602e3a35-flexvolume-dir\") pod \"kube-controller-manager-newest-cni-499584\" (UID: \"d3979a1427ce41233d285307602e3a35\") " pod="kube-system/kube-controller-manager-newest-cni-499584"
	Nov 23 10:14:25 newest-cni-499584 kubelet[1303]: I1123 10:14:25.095844    1303 apiserver.go:52] "Watching apiserver"
	Nov 23 10:14:25 newest-cni-499584 kubelet[1303]: I1123 10:14:25.129214    1303 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 10:14:25 newest-cni-499584 kubelet[1303]: I1123 10:14:25.190507    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-499584" podStartSLOduration=1.190487718 podStartE2EDuration="1.190487718s" podCreationTimestamp="2025-11-23 10:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:14:25.175868582 +0000 UTC m=+1.178054694" watchObservedRunningTime="2025-11-23 10:14:25.190487718 +0000 UTC m=+1.192673831"
	Nov 23 10:14:25 newest-cni-499584 kubelet[1303]: I1123 10:14:25.191903    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-499584" podStartSLOduration=1.191858308 podStartE2EDuration="1.191858308s" podCreationTimestamp="2025-11-23 10:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:14:25.190441801 +0000 UTC m=+1.192627939" watchObservedRunningTime="2025-11-23 10:14:25.191858308 +0000 UTC m=+1.194044420"
	Nov 23 10:14:25 newest-cni-499584 kubelet[1303]: I1123 10:14:25.197215    1303 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-499584"
	Nov 23 10:14:25 newest-cni-499584 kubelet[1303]: E1123 10:14:25.213491    1303 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-499584\" already exists" pod="kube-system/etcd-newest-cni-499584"
	Nov 23 10:14:25 newest-cni-499584 kubelet[1303]: I1123 10:14:25.241219    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-499584" podStartSLOduration=1.241158346 podStartE2EDuration="1.241158346s" podCreationTimestamp="2025-11-23 10:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:14:25.226570226 +0000 UTC m=+1.228756347" watchObservedRunningTime="2025-11-23 10:14:25.241158346 +0000 UTC m=+1.243344459"
	Nov 23 10:14:27 newest-cni-499584 kubelet[1303]: I1123 10:14:27.878322    1303 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 10:14:27 newest-cni-499584 kubelet[1303]: I1123 10:14:27.879958    1303 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 10:14:28 newest-cni-499584 kubelet[1303]: I1123 10:14:28.759329    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxmtg\" (UniqueName: \"kubernetes.io/projected/3933503c-90da-4b79-98e7-e4a22d58813d-kube-api-access-jxmtg\") pod \"kindnet-8pwmm\" (UID: \"3933503c-90da-4b79-98e7-e4a22d58813d\") " pod="kube-system/kindnet-8pwmm"
	Nov 23 10:14:28 newest-cni-499584 kubelet[1303]: I1123 10:14:28.759478    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3933503c-90da-4b79-98e7-e4a22d58813d-cni-cfg\") pod \"kindnet-8pwmm\" (UID: \"3933503c-90da-4b79-98e7-e4a22d58813d\") " pod="kube-system/kindnet-8pwmm"
	Nov 23 10:14:28 newest-cni-499584 kubelet[1303]: I1123 10:14:28.759501    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3933503c-90da-4b79-98e7-e4a22d58813d-xtables-lock\") pod \"kindnet-8pwmm\" (UID: \"3933503c-90da-4b79-98e7-e4a22d58813d\") " pod="kube-system/kindnet-8pwmm"
	Nov 23 10:14:28 newest-cni-499584 kubelet[1303]: I1123 10:14:28.759536    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3933503c-90da-4b79-98e7-e4a22d58813d-lib-modules\") pod \"kindnet-8pwmm\" (UID: \"3933503c-90da-4b79-98e7-e4a22d58813d\") " pod="kube-system/kindnet-8pwmm"
	Nov 23 10:14:28 newest-cni-499584 kubelet[1303]: I1123 10:14:28.904148    1303 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 10:14:28 newest-cni-499584 kubelet[1303]: I1123 10:14:28.964325    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpjqv\" (UniqueName: \"kubernetes.io/projected/8dace15f-cf56-4d36-9840-ceb07d85b8b0-kube-api-access-vpjqv\") pod \"kube-proxy-7ccmv\" (UID: \"8dace15f-cf56-4d36-9840-ceb07d85b8b0\") " pod="kube-system/kube-proxy-7ccmv"
	Nov 23 10:14:28 newest-cni-499584 kubelet[1303]: I1123 10:14:28.964389    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8dace15f-cf56-4d36-9840-ceb07d85b8b0-kube-proxy\") pod \"kube-proxy-7ccmv\" (UID: \"8dace15f-cf56-4d36-9840-ceb07d85b8b0\") " pod="kube-system/kube-proxy-7ccmv"
	Nov 23 10:14:28 newest-cni-499584 kubelet[1303]: I1123 10:14:28.964412    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8dace15f-cf56-4d36-9840-ceb07d85b8b0-lib-modules\") pod \"kube-proxy-7ccmv\" (UID: \"8dace15f-cf56-4d36-9840-ceb07d85b8b0\") " pod="kube-system/kube-proxy-7ccmv"
	Nov 23 10:14:28 newest-cni-499584 kubelet[1303]: I1123 10:14:28.964434    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8dace15f-cf56-4d36-9840-ceb07d85b8b0-xtables-lock\") pod \"kube-proxy-7ccmv\" (UID: \"8dace15f-cf56-4d36-9840-ceb07d85b8b0\") " pod="kube-system/kube-proxy-7ccmv"
	Nov 23 10:14:29 newest-cni-499584 kubelet[1303]: W1123 10:14:29.122459    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5/crio-60846d3e8e9928739ab167b2234f6f85c7454b1f089a543b85d13f4dbe9ee9c4 WatchSource:0}: Error finding container 60846d3e8e9928739ab167b2234f6f85c7454b1f089a543b85d13f4dbe9ee9c4: Status 404 returned error can't find the container with id 60846d3e8e9928739ab167b2234f6f85c7454b1f089a543b85d13f4dbe9ee9c4
	Nov 23 10:14:30 newest-cni-499584 kubelet[1303]: I1123 10:14:30.303280    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-8pwmm" podStartSLOduration=2.303259486 podStartE2EDuration="2.303259486s" podCreationTimestamp="2025-11-23 10:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:14:30.279041011 +0000 UTC m=+6.281227140" watchObservedRunningTime="2025-11-23 10:14:30.303259486 +0000 UTC m=+6.305445599"
	Nov 23 10:14:30 newest-cni-499584 kubelet[1303]: I1123 10:14:30.460801    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7ccmv" podStartSLOduration=2.460769359 podStartE2EDuration="2.460769359s" podCreationTimestamp="2025-11-23 10:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:14:30.307936582 +0000 UTC m=+6.310122728" watchObservedRunningTime="2025-11-23 10:14:30.460769359 +0000 UTC m=+6.462955480"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-499584 -n newest-cni-499584
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-499584 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gpv4n storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-499584 describe pod coredns-66bc5c9577-gpv4n storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-499584 describe pod coredns-66bc5c9577-gpv4n storage-provisioner: exit status 1 (99.235062ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gpv4n" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-499584 describe pod coredns-66bc5c9577-gpv4n storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.47s)
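For context, the post-mortem above boils down to two kubectl calls: list pods whose phase is not Running, then describe each by name (without a namespace, which is why the kube-system pods come back NotFound). A minimal sketch of that check as a standalone Go program — hypothetical, not the helpers_test.go code; it assumes kubectl is on PATH and the newest-cni-499584 context exists:

```go
// Sketch of the post-mortem check: list non-Running pods, then describe each.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "newest-cni-499584" // hypothetical: any kubectl context works
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		fmt.Println("listing non-running pods failed:", err)
		return
	}
	for _, pod := range strings.Fields(string(out)) {
		// Describing by bare name searches the default namespace, so pods that
		// live in kube-system return NotFound, as in the report above.
		desc, _ := exec.Command("kubectl", "--context", ctx,
			"describe", "pod", pod).CombinedOutput()
		fmt.Println(string(desc))
	}
}
```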

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (6.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-499584 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-499584 --alsologtostderr -v=1: exit status 80 (1.993568325s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-499584 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:14:50.822543  536244 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:14:50.822778  536244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:14:50.822791  536244 out.go:374] Setting ErrFile to fd 2...
	I1123 10:14:50.822797  536244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:14:50.823174  536244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:14:50.823497  536244 out.go:368] Setting JSON to false
	I1123 10:14:50.823554  536244 mustload.go:66] Loading cluster: newest-cni-499584
	I1123 10:14:50.824258  536244 config.go:182] Loaded profile config "newest-cni-499584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:14:50.824959  536244 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:50.842929  536244 host.go:66] Checking if "newest-cni-499584" exists ...
	I1123 10:14:50.843354  536244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:14:50.908967  536244 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 10:14:50.89945866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:14:50.909839  536244 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-499584 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 10:14:50.915091  536244 out.go:179] * Pausing node newest-cni-499584 ... 
	I1123 10:14:50.917971  536244 host.go:66] Checking if "newest-cni-499584" exists ...
	I1123 10:14:50.918358  536244 ssh_runner.go:195] Run: systemctl --version
	I1123 10:14:50.918416  536244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:50.937729  536244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:51.044379  536244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:14:51.058560  536244 pause.go:52] kubelet running: true
	I1123 10:14:51.058648  536244 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:14:51.284705  536244 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:14:51.284857  536244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:14:51.363578  536244 cri.go:89] found id: "96f8dc4383cc8b499df13d61d76c65f07df3e2a0a27a63c934b98f7d5f3da1d7"
	I1123 10:14:51.363663  536244 cri.go:89] found id: "1ebbc83efd198176f3a140eea4f94119fc8ee22e821e82115ee102cb0de5c991"
	I1123 10:14:51.363685  536244 cri.go:89] found id: "55951d0d04b4f47b2d5b5cf62ccc211475fb562db7126dbfd7b727861257eac0"
	I1123 10:14:51.363706  536244 cri.go:89] found id: "29bf860a34581ef12a5c2e695cb5c4f9bee91e4dfc153fd656e57d8c48fa1f90"
	I1123 10:14:51.363743  536244 cri.go:89] found id: "5ba284305a4abd603bf7200240b618f02ce262119d49e43b0da6cf7313bbc7be"
	I1123 10:14:51.363755  536244 cri.go:89] found id: "7ef630ea40b951127d767ee2e09ebb4700a9b36e54474665707cf2be5860d032"
	I1123 10:14:51.363759  536244 cri.go:89] found id: ""
	I1123 10:14:51.363841  536244 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:14:51.375048  536244 retry.go:31] will retry after 330.820007ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:14:51Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:14:51.706661  536244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:14:51.721398  536244 pause.go:52] kubelet running: false
	I1123 10:14:51.721494  536244 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:14:51.890264  536244 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:14:51.890356  536244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:14:51.972875  536244 cri.go:89] found id: "96f8dc4383cc8b499df13d61d76c65f07df3e2a0a27a63c934b98f7d5f3da1d7"
	I1123 10:14:51.972901  536244 cri.go:89] found id: "1ebbc83efd198176f3a140eea4f94119fc8ee22e821e82115ee102cb0de5c991"
	I1123 10:14:51.972906  536244 cri.go:89] found id: "55951d0d04b4f47b2d5b5cf62ccc211475fb562db7126dbfd7b727861257eac0"
	I1123 10:14:51.972909  536244 cri.go:89] found id: "29bf860a34581ef12a5c2e695cb5c4f9bee91e4dfc153fd656e57d8c48fa1f90"
	I1123 10:14:51.972913  536244 cri.go:89] found id: "5ba284305a4abd603bf7200240b618f02ce262119d49e43b0da6cf7313bbc7be"
	I1123 10:14:51.972917  536244 cri.go:89] found id: "7ef630ea40b951127d767ee2e09ebb4700a9b36e54474665707cf2be5860d032"
	I1123 10:14:51.972920  536244 cri.go:89] found id: ""
	I1123 10:14:51.972969  536244 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:14:51.985180  536244 retry.go:31] will retry after 461.649252ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:14:51Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:14:52.447670  536244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:14:52.468301  536244 pause.go:52] kubelet running: false
	I1123 10:14:52.468388  536244 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:14:52.633820  536244 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:14:52.633908  536244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:14:52.709825  536244 cri.go:89] found id: "96f8dc4383cc8b499df13d61d76c65f07df3e2a0a27a63c934b98f7d5f3da1d7"
	I1123 10:14:52.709849  536244 cri.go:89] found id: "1ebbc83efd198176f3a140eea4f94119fc8ee22e821e82115ee102cb0de5c991"
	I1123 10:14:52.709854  536244 cri.go:89] found id: "55951d0d04b4f47b2d5b5cf62ccc211475fb562db7126dbfd7b727861257eac0"
	I1123 10:14:52.709859  536244 cri.go:89] found id: "29bf860a34581ef12a5c2e695cb5c4f9bee91e4dfc153fd656e57d8c48fa1f90"
	I1123 10:14:52.709862  536244 cri.go:89] found id: "5ba284305a4abd603bf7200240b618f02ce262119d49e43b0da6cf7313bbc7be"
	I1123 10:14:52.709867  536244 cri.go:89] found id: "7ef630ea40b951127d767ee2e09ebb4700a9b36e54474665707cf2be5860d032"
	I1123 10:14:52.709870  536244 cri.go:89] found id: ""
	I1123 10:14:52.709949  536244 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:14:52.724778  536244 out.go:203] 
	W1123 10:14:52.727813  536244 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:14:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:14:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:14:52.727839  536244 out.go:285] * 
	* 
	W1123 10:14:52.734854  536244 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:14:52.739697  536244 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-499584 --alsologtostderr -v=1 failed: exit status 80
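The three `sudo runc list -f json` attempts in the stderr above (retry after ~330ms, then ~461ms, then exit with GUEST_PAUSE) follow a retry-with-backoff shape. A rough sketch of that pattern, with hypothetical helper names rather than minikube's actual pause code:

```go
// Sketch of the retry pattern visible in the log: re-run a command a few times
// with increasing, jittered delays, then surface the last error.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func listRunningContainers() error {
	// Fails with "open /run/runc: no such file or directory" when runc has no
	// state directory under its root, which is the failure seen above.
	return exec.Command("sudo", "runc", "list", "-f", "json").Run()
}

func main() {
	delay := 300 * time.Millisecond
	var err error
	for attempt := 0; attempt < 3; attempt++ {
		if err = listRunningContainers(); err == nil {
			fmt.Println("containers listed")
			return
		}
		if attempt == 2 {
			break
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	fmt.Println("giving up:", err)
}
```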
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-499584
helpers_test.go:243: (dbg) docker inspect newest-cni-499584:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5",
	        "Created": "2025-11-23T10:13:53.150463538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 534451,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:14:35.588568101Z",
	            "FinishedAt": "2025-11-23T10:14:34.753824894Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5/hosts",
	        "LogPath": "/var/lib/docker/containers/e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5/e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5-json.log",
	        "Name": "/newest-cni-499584",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-499584:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-499584",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5",
	                "LowerDir": "/var/lib/docker/overlay2/0ddfc19f10ceb1d022289b3c3394eb4fa72b02f60299c24da29cbd9e3855f5fb-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0ddfc19f10ceb1d022289b3c3394eb4fa72b02f60299c24da29cbd9e3855f5fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0ddfc19f10ceb1d022289b3c3394eb4fa72b02f60299c24da29cbd9e3855f5fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0ddfc19f10ceb1d022289b3c3394eb4fa72b02f60299c24da29cbd9e3855f5fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-499584",
	                "Source": "/var/lib/docker/volumes/newest-cni-499584/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-499584",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-499584",
	                "name.minikube.sigs.k8s.io": "newest-cni-499584",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ccc72c12499e5184634ab555a2e446e413f75726b35eb2547a61f80ff41776e8",
	            "SandboxKey": "/var/run/docker/netns/ccc72c12499e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-499584": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:d8:ed:05:9e:a0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e27c561c33f1c11e6ad07d3f525986a08d52d1b7909a984158deea3644563840",
	                    "EndpointID": "4c67ed7a9563bfdddbc91b999cc084d7cb653f2ff3fe8982ede9e63d9d2e7bc5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-499584",
	                        "e79d7d886da1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
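The SSH endpoint the pause command dialed (127.0.0.1:33506 in the sshutil line earlier) comes straight from the "22/tcp" entry of this port map. A minimal sketch of how the Go template string passed to `docker container inspect -f` extracts it, applied to a trimmed-down stand-in for the inspect data (the struct here is illustrative, not Docker's API types):

```go
// Apply the HostPort template from the log to a small port-map struct.
package main

import (
	"fmt"
	"os"
	"text/template"
)

type binding struct {
	HostIp   string
	HostPort string
}

type settings struct {
	Ports map[string][]binding
}

func main() {
	data := struct{ NetworkSettings settings }{
		NetworkSettings: settings{Ports: map[string][]binding{
			"22/tcp": {{HostIp: "127.0.0.1", HostPort: "33506"}},
		}},
	}
	tmpl := template.Must(template.New("ssh").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println() // prints: 33506
}
```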
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-499584 -n newest-cni-499584
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-499584 -n newest-cni-499584: exit status 2 (381.610154ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-499584 logs -n 25
E1123 10:14:53.978710  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/custom-flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-499584 logs -n 25: (1.113748771s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ no-preload-020224 image list --format=json                                                                                                                                                                                                    │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │ 23 Nov 25 10:11 UTC │
	│ pause   │ -p no-preload-020224 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │                     │
	│ delete  │ -p no-preload-020224                                                                                                                                                                                                                          │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │ 23 Nov 25 10:12 UTC │
	│ delete  │ -p no-preload-020224                                                                                                                                                                                                                          │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ delete  │ -p disable-driver-mounts-097888                                                                                                                                                                                                               │ disable-driver-mounts-097888 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-566990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │                     │
	│ stop    │ -p embed-certs-566990 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ addons  │ enable dashboard -p embed-certs-566990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-330197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-330197 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ image   │ embed-certs-566990 image list --format=json                                                                                                                                                                                                   │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ pause   │ -p embed-certs-566990 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	│ delete  │ -p embed-certs-566990                                                                                                                                                                                                                         │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ delete  │ -p embed-certs-566990                                                                                                                                                                                                                         │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ start   │ -p newest-cni-499584 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:14 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-330197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ start   │ -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-499584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │                     │
	│ stop    │ -p newest-cni-499584 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ addons  │ enable dashboard -p newest-cni-499584 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ start   │ -p newest-cni-499584 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ image   │ newest-cni-499584 image list --format=json                                                                                                                                                                                                    │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ pause   │ -p newest-cni-499584 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:14:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:14:35.308429  534325 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:14:35.308606  534325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:14:35.308636  534325 out.go:374] Setting ErrFile to fd 2...
	I1123 10:14:35.308660  534325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:14:35.308962  534325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:14:35.309373  534325 out.go:368] Setting JSON to false
	I1123 10:14:35.310426  534325 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10624,"bootTime":1763882251,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:14:35.310533  534325 start.go:143] virtualization:  
	I1123 10:14:35.314130  534325 out.go:179] * [newest-cni-499584] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:14:35.318169  534325 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 10:14:35.318306  534325 notify.go:221] Checking for updates...
	I1123 10:14:35.324604  534325 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:14:35.327708  534325 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:14:35.330662  534325 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 10:14:35.333670  534325 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:14:35.336633  534325 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:14:35.340028  534325 config.go:182] Loaded profile config "newest-cni-499584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:14:35.340614  534325 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:14:35.371141  534325 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:14:35.371271  534325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:14:35.436934  534325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:14:35.426581673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:14:35.437078  534325 docker.go:319] overlay module found
	I1123 10:14:35.440404  534325 out.go:179] * Using the docker driver based on existing profile
	I1123 10:14:35.443223  534325 start.go:309] selected driver: docker
	I1123 10:14:35.443245  534325 start.go:927] validating driver "docker" against &{Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:14:35.443371  534325 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:14:35.444083  534325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:14:35.497494  534325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:14:35.4876245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:14:35.497858  534325 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:14:35.497886  534325 cni.go:84] Creating CNI manager for ""
	I1123 10:14:35.497946  534325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:14:35.497992  534325 start.go:353] cluster config:
	{Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:14:35.503042  534325 out.go:179] * Starting "newest-cni-499584" primary control-plane node in "newest-cni-499584" cluster
	I1123 10:14:35.505793  534325 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:14:35.508757  534325 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:14:35.511894  534325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:14:35.511952  534325 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 10:14:35.511963  534325 cache.go:65] Caching tarball of preloaded images
	I1123 10:14:35.512065  534325 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 10:14:35.512076  534325 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
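The preload step above only checks whether a cached image tarball already exists for the requested Kubernetes version and runtime before deciding to download. A minimal sketch of that existence check, assuming a cache layout and file name like the one in the log (the helper and paths are illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath builds the expected tarball name for a Kubernetes version and
	// runtime, mirroring the naming visible in the log above.
	func preloadPath(cacheDir, k8sVersion, runtime, arch string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4", k8sVersion, runtime, arch)
		return filepath.Join(cacheDir, "preloaded-tarball", name)
	}

	func main() {
		p := preloadPath(os.ExpandEnv("$HOME/.minikube/cache"), "v1.34.1", "cri-o", "arm64")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload, skipping download:", p)
		} else {
			fmt.Println("no local preload, would download:", p)
		}
	}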
	I1123 10:14:35.512186  534325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/config.json ...
	I1123 10:14:35.512284  534325 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:14:35.537457  534325 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:14:35.537482  534325 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:14:35.537505  534325 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:14:35.537537  534325 start.go:360] acquireMachinesLock for newest-cni-499584: {Name:mk060761daeb1a62836bf24a9b9e867393b1f580 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:14:35.537611  534325 start.go:364] duration metric: took 51.693µs to acquireMachinesLock for "newest-cni-499584"
	I1123 10:14:35.537632  534325 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:14:35.537637  534325 fix.go:54] fixHost starting: 
	I1123 10:14:35.537894  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:35.553302  534325 fix.go:112] recreateIfNeeded on newest-cni-499584: state=Stopped err=<nil>
	W1123 10:14:35.553334  534325 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 10:14:35.397704  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	W1123 10:14:37.895831  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	I1123 10:14:35.556606  534325 out.go:252] * Restarting existing docker container for "newest-cni-499584" ...
	I1123 10:14:35.556691  534325 cli_runner.go:164] Run: docker start newest-cni-499584
	I1123 10:14:35.810485  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:35.841678  534325 kic.go:430] container "newest-cni-499584" state is running.
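The cli_runner calls above read container state by driving the docker CLI with a Go template. A rough stand-alone equivalent using os/exec (only the container name comes from the log; the helper itself is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState shells out to the docker CLI, as cli_runner does in the log,
	// and returns the value of .State.Status for the named container.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("newest-cni-499584")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("state:", state) // e.g. "running" or "exited"
	}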
	I1123 10:14:35.842092  534325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-499584
	I1123 10:14:35.865371  534325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/config.json ...
	I1123 10:14:35.865636  534325 machine.go:94] provisionDockerMachine start ...
	I1123 10:14:35.865701  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:35.888754  534325 main.go:143] libmachine: Using SSH client type: native
	I1123 10:14:35.889100  534325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I1123 10:14:35.889118  534325 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:14:35.889757  534325 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 10:14:39.045105  534325 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-499584
	
	I1123 10:14:39.045131  534325 ubuntu.go:182] provisioning hostname "newest-cni-499584"
	I1123 10:14:39.045248  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:39.063953  534325 main.go:143] libmachine: Using SSH client type: native
	I1123 10:14:39.064263  534325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I1123 10:14:39.064279  534325 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-499584 && echo "newest-cni-499584" | sudo tee /etc/hostname
	I1123 10:14:39.227033  534325 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-499584
	
	I1123 10:14:39.227156  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:39.245578  534325 main.go:143] libmachine: Using SSH client type: native
	I1123 10:14:39.245894  534325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I1123 10:14:39.245917  534325 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-499584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-499584/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-499584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:14:39.399001  534325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:14:39.399024  534325 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 10:14:39.399055  534325 ubuntu.go:190] setting up certificates
	I1123 10:14:39.399073  534325 provision.go:84] configureAuth start
	I1123 10:14:39.399131  534325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-499584
	I1123 10:14:39.416579  534325 provision.go:143] copyHostCerts
	I1123 10:14:39.416662  534325 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 10:14:39.416680  534325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 10:14:39.416761  534325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 10:14:39.416866  534325 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 10:14:39.416870  534325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 10:14:39.416899  534325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 10:14:39.416961  534325 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 10:14:39.416967  534325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 10:14:39.416990  534325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 10:14:39.417043  534325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.newest-cni-499584 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-499584]
	I1123 10:14:39.627598  534325 provision.go:177] copyRemoteCerts
	I1123 10:14:39.627689  534325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:14:39.627763  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:39.647970  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:39.757042  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:14:39.774686  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:14:39.794086  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:14:39.812248  534325 provision.go:87] duration metric: took 413.152366ms to configureAuth
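configureAuth above regenerates a server certificate whose SANs cover 127.0.0.1, 192.168.76.2, localhost, minikube and the node name, signed by the profile CA. A sketch of that step with crypto/x509, assuming the CA material is PKCS#1 RSA PEM as the ca.pem/ca-key.pem file names suggest; this is an illustration, not the provision.go implementation:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)

	// loadCA reads a PEM-encoded CA certificate and RSA key from disk.
	func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey, error) {
		certPEM, err := os.ReadFile(certPath)
		if err != nil {
			return nil, nil, err
		}
		keyPEM, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, nil, err
		}
		certBlock, _ := pem.Decode(certPEM)
		keyBlock, _ := pem.Decode(keyPEM)
		if certBlock == nil || keyBlock == nil {
			return nil, nil, fmt.Errorf("no PEM data found")
		}
		caCert, err := x509.ParseCertificate(certBlock.Bytes)
		if err != nil {
			return nil, nil, err
		}
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		if err != nil {
			return nil, nil, err
		}
		return caCert, caKey, nil
	}

	func main() {
		caCert, caKey, err := loadCA("ca.pem", "ca-key.pem")
		if err != nil {
			panic(err)
		}
		// Server key plus a certificate carrying the SANs seen in the log.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-499584"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-499584"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		out, _ := os.Create("server.pem")
		defer out.Close()
		pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}

The generated server.pem plays the role of the file copied to /etc/docker/server.pem by copyRemoteCerts in the log.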
	I1123 10:14:39.812274  534325 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:14:39.812473  534325 config.go:182] Loaded profile config "newest-cni-499584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:14:39.812587  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:39.831627  534325 main.go:143] libmachine: Using SSH client type: native
	I1123 10:14:39.831936  534325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I1123 10:14:39.831959  534325 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:14:40.202065  534325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:14:40.202163  534325 machine.go:97] duration metric: took 4.336516564s to provisionDockerMachine
	I1123 10:14:40.202198  534325 start.go:293] postStartSetup for "newest-cni-499584" (driver="docker")
	I1123 10:14:40.202228  534325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:14:40.202328  534325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:14:40.202388  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:40.221920  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:40.329811  534325 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:14:40.333732  534325 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:14:40.333763  534325 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:14:40.333774  534325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 10:14:40.333829  534325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 10:14:40.333908  534325 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 10:14:40.334018  534325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:14:40.341710  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:14:40.360854  534325 start.go:296] duration metric: took 158.623442ms for postStartSetup
	I1123 10:14:40.360956  534325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:14:40.361017  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:40.378625  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:40.482834  534325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:14:40.488027  534325 fix.go:56] duration metric: took 4.950382019s for fixHost
	I1123 10:14:40.488055  534325 start.go:83] releasing machines lock for "newest-cni-499584", held for 4.950434147s
	I1123 10:14:40.488126  534325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-499584
	I1123 10:14:40.507445  534325 ssh_runner.go:195] Run: cat /version.json
	I1123 10:14:40.507515  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:40.507781  534325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:14:40.507851  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:40.526536  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:40.540744  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:40.633331  534325 ssh_runner.go:195] Run: systemctl --version
	I1123 10:14:40.735956  534325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:14:40.771974  534325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:14:40.776327  534325 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:14:40.776407  534325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:14:40.784473  534325 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:14:40.784497  534325 start.go:496] detecting cgroup driver to use...
	I1123 10:14:40.784529  534325 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:14:40.784595  534325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:14:40.802198  534325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:14:40.815436  534325 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:14:40.815516  534325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:14:40.833773  534325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:14:40.847175  534325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:14:40.964031  534325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:14:41.079493  534325 docker.go:234] disabling docker service ...
	I1123 10:14:41.079592  534325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:14:41.099078  534325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:14:41.126997  534325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:14:41.246640  534325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:14:41.357516  534325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:14:41.371610  534325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:14:41.386018  534325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:14:41.386151  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.400270  534325 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:14:41.400380  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.410282  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.419540  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.429966  534325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:14:41.442207  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.452481  534325 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.462238  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.472504  534325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:14:41.480544  534325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:14:41.488228  534325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:14:41.629522  534325 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:14:41.808644  534325 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:14:41.808710  534325 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:14:41.812474  534325 start.go:564] Will wait 60s for crictl version
	I1123 10:14:41.812551  534325 ssh_runner.go:195] Run: which crictl
	I1123 10:14:41.816298  534325 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:14:41.846825  534325 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:14:41.846917  534325 ssh_runner.go:195] Run: crio --version
	I1123 10:14:41.875254  534325 ssh_runner.go:195] Run: crio --version
	I1123 10:14:41.910420  534325 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:14:41.913261  534325 cli_runner.go:164] Run: docker network inspect newest-cni-499584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:14:41.928711  534325 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:14:41.932661  534325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
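The one-liner above rewrites /etc/hosts idempotently: any stale host.minikube.internal line is dropped and a fresh mapping appended. The same idea in Go, run against a local copy of the file since the real target needs root (the file name here is an assumption):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost removes any line already mapping the given hostname and appends
	// a fresh "ip\thostname" entry, mirroring the grep/echo pipeline in the log.
	func upsertHost(hostsFile, ip, hostname string) error {
		data, err := os.ReadFile(hostsFile)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+hostname) {
				continue // drop stale mapping
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
		return os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := upsertHost("hosts.copy", "192.168.76.1", "host.minikube.internal"); err != nil {
			fmt.Println("error:", err)
		}
	}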
	I1123 10:14:41.945351  534325 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1123 10:14:39.896516  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	W1123 10:14:41.896853  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	I1123 10:14:41.948184  534325 kubeadm.go:884] updating cluster {Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:14:41.948333  534325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:14:41.948404  534325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:14:41.983037  534325 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:14:41.983059  534325 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:14:41.983122  534325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:14:42.012647  534325 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:14:42.012670  534325 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:14:42.012684  534325 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 10:14:42.012801  534325 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-499584 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
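The kubelet drop-in above clears ExecStart and redefines it with node-specific flags before it is copied to the machine. A small text/template sketch that renders the same content from parameters (the struct and output handling are assumptions; the flag values are taken from the log):

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletDropIn mirrors the [Service] override printed in the log: ExecStart is
	// cleared and then redefined with node-specific flags.
	const kubeletDropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
		params := struct{ BinDir, NodeName, NodeIP string }{
			BinDir:   "/var/lib/minikube/binaries/v1.34.1",
			NodeName: "newest-cni-499584",
			NodeIP:   "192.168.76.2",
		}
		// In the real flow this content is scp'd to
		// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf before daemon-reload.
		tmpl.Execute(os.Stdout, params)
	}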
	I1123 10:14:42.012913  534325 ssh_runner.go:195] Run: crio config
	I1123 10:14:42.085762  534325 cni.go:84] Creating CNI manager for ""
	I1123 10:14:42.085845  534325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:14:42.085887  534325 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 10:14:42.085940  534325 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-499584 NodeName:newest-cni-499584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:14:42.086144  534325 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-499584"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:14:42.086272  534325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:14:42.099771  534325 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:14:42.099872  534325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:14:42.109476  534325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:14:42.125982  534325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:14:42.143068  534325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
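The kubeadm.yaml rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A short sketch that walks such a file and prints each document's kind, assuming gopkg.in/yaml.v3 is available:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			// For a config like the one above this prints InitConfiguration,
			// ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration.
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}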
	I1123 10:14:42.162444  534325 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:14:42.167161  534325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:14:42.179960  534325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:14:42.317087  534325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:14:42.336114  534325 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584 for IP: 192.168.76.2
	I1123 10:14:42.336139  534325 certs.go:195] generating shared ca certs ...
	I1123 10:14:42.336157  534325 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:42.336301  534325 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:14:42.336359  534325 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:14:42.336372  534325 certs.go:257] generating profile certs ...
	I1123 10:14:42.336466  534325 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/client.key
	I1123 10:14:42.336546  534325 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.key.22d7de13
	I1123 10:14:42.336598  534325 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.key
	I1123 10:14:42.336725  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:14:42.336762  534325 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:14:42.336780  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:14:42.336809  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:14:42.336841  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:14:42.336874  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:14:42.336925  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:14:42.337678  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:14:42.363407  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:14:42.382696  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:14:42.404059  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:14:42.425457  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:14:42.449626  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:14:42.473741  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:14:42.503125  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:14:42.539613  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:14:42.564062  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:14:42.584177  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:14:42.606938  534325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:14:42.620683  534325 ssh_runner.go:195] Run: openssl version
	I1123 10:14:42.627518  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:14:42.636658  534325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:14:42.640680  534325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:14:42.640792  534325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:14:42.685708  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 10:14:42.696740  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:14:42.705991  534325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:14:42.709884  534325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:14:42.709962  534325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:14:42.750861  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:14:42.759122  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:14:42.767889  534325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:14:42.772006  534325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:14:42.772075  534325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:14:42.814113  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
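The openssl x509 -hash / ln -fs sequence above registers each PEM in the OpenSSL-style trust directory under its subject hash. The same pattern from Go, shelling out to the openssl CLI exactly as the log does (root access and the paths are assumptions):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash reproduces the "openssl x509 -hash" + "ln -fs" pattern from
	// the log: OpenSSL-style trust stores look certificates up by subject hash.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hash %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println("error:", err)
		}
	}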
	I1123 10:14:42.823220  534325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:14:42.827234  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:14:42.869026  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:14:42.913588  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:14:42.969491  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:14:43.021834  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:14:43.089424  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
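The -checkend 86400 probes above ask whether each existing cluster certificate expires within the next 24 hours; only certificates that pass are kept across the restart. An equivalent check in Go with crypto/x509 (path and window mirror the log; the helper is illustrative):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file expires
	// before now+window, mirroring `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM data", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.NotAfter.Before(time.Now().Add(window)), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("expires within 24h:", soon) // certs are regenerated only if this is true
	}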
	I1123 10:14:43.174105  534325 kubeadm.go:401] StartCluster: {Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:14:43.174258  534325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:14:43.174371  534325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:14:43.253263  534325 cri.go:89] found id: "55951d0d04b4f47b2d5b5cf62ccc211475fb562db7126dbfd7b727861257eac0"
	I1123 10:14:43.253335  534325 cri.go:89] found id: "29bf860a34581ef12a5c2e695cb5c4f9bee91e4dfc153fd656e57d8c48fa1f90"
	I1123 10:14:43.253355  534325 cri.go:89] found id: "5ba284305a4abd603bf7200240b618f02ce262119d49e43b0da6cf7313bbc7be"
	I1123 10:14:43.253381  534325 cri.go:89] found id: "7ef630ea40b951127d767ee2e09ebb4700a9b36e54474665707cf2be5860d032"
	I1123 10:14:43.253424  534325 cri.go:89] found id: ""
	I1123 10:14:43.253525  534325 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:14:43.275277  534325 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:14:43Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:14:43.275413  534325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:14:43.288015  534325 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:14:43.288090  534325 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:14:43.288184  534325 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:14:43.300089  534325 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:14:43.300769  534325 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-499584" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:14:43.301095  534325 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-282998/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-499584" cluster setting kubeconfig missing "newest-cni-499584" context setting]
	I1123 10:14:43.301637  534325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:43.303525  534325 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:14:43.315357  534325 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:14:43.315438  534325 kubeadm.go:602] duration metric: took 27.319463ms to restartPrimaryControlPlane
	I1123 10:14:43.315462  534325 kubeadm.go:403] duration metric: took 141.368104ms to StartCluster
	I1123 10:14:43.315508  534325 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:43.315602  534325 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:14:43.316666  534325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:43.316959  534325 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:14:43.317718  534325 config.go:182] Loaded profile config "newest-cni-499584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:14:43.317697  534325 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:14:43.317796  534325 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-499584"
	I1123 10:14:43.317809  534325 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-499584"
	W1123 10:14:43.317818  534325 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:14:43.317823  534325 addons.go:70] Setting dashboard=true in profile "newest-cni-499584"
	I1123 10:14:43.317850  534325 addons.go:70] Setting default-storageclass=true in profile "newest-cni-499584"
	I1123 10:14:43.317863  534325 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-499584"
	I1123 10:14:43.317855  534325 addons.go:239] Setting addon dashboard=true in "newest-cni-499584"
	W1123 10:14:43.317900  534325 addons.go:248] addon dashboard should already be in state true
	I1123 10:14:43.317929  534325 host.go:66] Checking if "newest-cni-499584" exists ...
	I1123 10:14:43.318191  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:43.317844  534325 host.go:66] Checking if "newest-cni-499584" exists ...
	I1123 10:14:43.319197  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:43.319350  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:43.322853  534325 out.go:179] * Verifying Kubernetes components...
	I1123 10:14:43.326101  534325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:14:43.365483  534325 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:14:43.375618  534325 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:14:43.380627  534325 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:14:43.413785  529379 pod_ready.go:94] pod "coredns-66bc5c9577-pphv6" is "Ready"
	I1123 10:14:43.413811  529379 pod_ready.go:86] duration metric: took 36.023476036s for pod "coredns-66bc5c9577-pphv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.423051  529379 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.472047  529379 pod_ready.go:94] pod "etcd-default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:43.472071  529379 pod_ready.go:86] duration metric: took 48.99566ms for pod "etcd-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.480493  529379 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.497900  529379 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:43.497926  529379 pod_ready.go:86] duration metric: took 17.40953ms for pod "kube-apiserver-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.501812  529379 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.594337  529379 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:43.594360  529379 pod_ready.go:86] duration metric: took 92.526931ms for pod "kube-controller-manager-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.793574  529379 pod_ready.go:83] waiting for pod "kube-proxy-75qqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:44.193846  529379 pod_ready.go:94] pod "kube-proxy-75qqt" is "Ready"
	I1123 10:14:44.193870  529379 pod_ready.go:86] duration metric: took 400.271598ms for pod "kube-proxy-75qqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:44.394519  529379 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:44.794600  529379 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:44.794624  529379 pod_ready.go:86] duration metric: took 400.080817ms for pod "kube-scheduler-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:44.794638  529379 pod_ready.go:40] duration metric: took 37.408453068s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:14:44.888790  529379 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:14:44.892050  529379 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-330197" cluster and "default" namespace by default
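The pod_ready lines from the other profile above poll each kube-system pod until its Ready condition turns true. A hedged client-go sketch of that wait loop (kubeconfig location and the kube-dns label selector are assumptions; the real test helper differs in detail):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady returns true when the PodReady condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		ctx := context.Background()
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err != nil {
				panic(err)
			}
			allReady := len(pods.Items) > 0
			for i := range pods.Items {
				if !isReady(&pods.Items[i]) {
					allReady = false
				}
			}
			if allReady {
				fmt.Println("coredns pods are Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}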
	I1123 10:14:43.380644  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:14:43.380714  534325 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:14:43.380780  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:43.384089  534325 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:14:43.384112  534325 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:14:43.384176  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:43.393141  534325 addons.go:239] Setting addon default-storageclass=true in "newest-cni-499584"
	W1123 10:14:43.393166  534325 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:14:43.393191  534325 host.go:66] Checking if "newest-cni-499584" exists ...
	I1123 10:14:43.393629  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:43.426018  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:43.454818  534325 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:14:43.454840  534325 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:14:43.454905  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:43.477805  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:43.499014  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:43.684857  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:14:43.684931  534325 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:14:43.710152  534325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:14:43.742808  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:14:43.742881  534325 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:14:43.748309  534325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:14:43.767832  534325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:14:43.777593  534325 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:14:43.777753  534325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:14:43.819844  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:14:43.819915  534325 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:14:43.906458  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:14:43.906530  534325 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:14:43.961147  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:14:43.961219  534325 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:14:44.045109  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:14:44.045187  534325 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:14:44.085266  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:14:44.085342  534325 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:14:44.122397  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:14:44.122478  534325 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:14:44.149167  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:14:44.149243  534325 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:14:44.176014  534325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:14:49.874831  534325 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.126442363s)
	I1123 10:14:49.874895  534325 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.106997388s)
	I1123 10:14:49.875220  534325 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.097427038s)
	I1123 10:14:49.875249  534325 api_server.go:72] duration metric: took 6.558232564s to wait for apiserver process to appear ...
	I1123 10:14:49.875256  534325 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:14:49.875268  534325 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:14:49.875552  534325 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.699461585s)
	I1123 10:14:49.878710  534325 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-499584 addons enable metrics-server
	
	I1123 10:14:49.899521  534325 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:14:49.901259  534325 api_server.go:141] control plane version: v1.34.1
	I1123 10:14:49.901287  534325 api_server.go:131] duration metric: took 26.024714ms to wait for apiserver health ...
	I1123 10:14:49.901296  534325 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:14:49.907916  534325 system_pods.go:59] 8 kube-system pods found
	I1123 10:14:49.907956  534325 system_pods.go:61] "coredns-66bc5c9577-gpv4n" [3ac78ff6-250d-4ce6-ba6f-913ba5a46be8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:14:49.907965  534325 system_pods.go:61] "etcd-newest-cni-499584" [fbc5fde9-9d75-41ee-a27e-bea9e43c5c1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:14:49.907971  534325 system_pods.go:61] "kindnet-8pwmm" [3933503c-90da-4b79-98e7-e4a22d58813d] Running
	I1123 10:14:49.907978  534325 system_pods.go:61] "kube-apiserver-newest-cni-499584" [2a4c121c-305b-4eef-8b3a-127a1fef8812] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:14:49.907985  534325 system_pods.go:61] "kube-controller-manager-newest-cni-499584" [c00e062c-870f-4ed7-a05d-615fc6c7d81d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:14:49.907989  534325 system_pods.go:61] "kube-proxy-7ccmv" [8dace15f-cf56-4d36-9840-ceb07d85b8b0] Running
	I1123 10:14:49.907995  534325 system_pods.go:61] "kube-scheduler-newest-cni-499584" [94684fe3-8d3e-4f48-9dad-6f0c6414f3c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:14:49.908028  534325 system_pods.go:61] "storage-provisioner" [70f72df9-2a87-468c-9f4c-2df81d587a29] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:14:49.908042  534325 system_pods.go:74] duration metric: took 6.740578ms to wait for pod list to return data ...
	I1123 10:14:49.908051  534325 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:14:49.908970  534325 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 10:14:49.911793  534325 addons.go:530] duration metric: took 6.594095134s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 10:14:49.916336  534325 default_sa.go:45] found service account: "default"
	I1123 10:14:49.916411  534325 default_sa.go:55] duration metric: took 8.348899ms for default service account to be created ...
	I1123 10:14:49.916442  534325 kubeadm.go:587] duration metric: took 6.59942349s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:14:49.916485  534325 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:14:49.919381  534325 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:14:49.919471  534325 node_conditions.go:123] node cpu capacity is 2
	I1123 10:14:49.919500  534325 node_conditions.go:105] duration metric: took 2.99226ms to run NodePressure ...
	I1123 10:14:49.919527  534325 start.go:242] waiting for startup goroutines ...
	I1123 10:14:49.919552  534325 start.go:247] waiting for cluster config update ...
	I1123 10:14:49.919581  534325 start.go:256] writing updated cluster config ...
	I1123 10:14:49.919880  534325 ssh_runner.go:195] Run: rm -f paused
	I1123 10:14:50.006509  534325 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:14:50.010080  534325 out.go:179] * Done! kubectl is now configured to use "newest-cni-499584" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.773736568Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.786087556Z" level=info msg="Running pod sandbox: kube-system/kindnet-8pwmm/POD" id=1c211209-be46-49bc-9be1-393785774f5e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.786341107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.806637016Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1c211209-be46-49bc-9be1-393785774f5e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.807270035Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=40ed3a46-69df-43fd-a5b4-265318e69268 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.814153951Z" level=info msg="Ran pod sandbox be5a70ad752e05e4d82641a3ec0db4ab2476468c76ceb780bbbbd95d4637cd51 with infra container: kube-system/kindnet-8pwmm/POD" id=1c211209-be46-49bc-9be1-393785774f5e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.841225302Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=44279a9a-7f0d-46c4-a4a1-d9dbd89bba4e name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.844608469Z" level=info msg="Ran pod sandbox 260acb60db6b434fa427d4f4aca97487609e08897d6e1496dcc09be385345a8a with infra container: kube-system/kube-proxy-7ccmv/POD" id=40ed3a46-69df-43fd-a5b4-265318e69268 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.849080712Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=62755efc-8c24-4bf7-a315-1df1571fbbf1 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.850598094Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=92f63461-047a-49d4-9214-ba1aaaf703f2 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.854593135Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=773b8387-3646-4125-bb36-ff5f88d2e179 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.860598145Z" level=info msg="Creating container: kube-system/kindnet-8pwmm/kindnet-cni" id=25059575-85a2-4b96-9bcf-94105c5c8230 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.861540921Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.872494578Z" level=info msg="Creating container: kube-system/kube-proxy-7ccmv/kube-proxy" id=7e30b844-a9dc-485e-a895-8b46dbdd7fd3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.872683291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.887473965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.887864904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.8886772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.888836202Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.927785057Z" level=info msg="Created container 1ebbc83efd198176f3a140eea4f94119fc8ee22e821e82115ee102cb0de5c991: kube-system/kindnet-8pwmm/kindnet-cni" id=25059575-85a2-4b96-9bcf-94105c5c8230 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.929099589Z" level=info msg="Starting container: 1ebbc83efd198176f3a140eea4f94119fc8ee22e821e82115ee102cb0de5c991" id=f9ed459f-7e84-4b68-b929-14a09ce85a36 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.939090811Z" level=info msg="Started container" PID=1066 containerID=1ebbc83efd198176f3a140eea4f94119fc8ee22e821e82115ee102cb0de5c991 description=kube-system/kindnet-8pwmm/kindnet-cni id=f9ed459f-7e84-4b68-b929-14a09ce85a36 name=/runtime.v1.RuntimeService/StartContainer sandboxID=be5a70ad752e05e4d82641a3ec0db4ab2476468c76ceb780bbbbd95d4637cd51
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.945879865Z" level=info msg="Created container 96f8dc4383cc8b499df13d61d76c65f07df3e2a0a27a63c934b98f7d5f3da1d7: kube-system/kube-proxy-7ccmv/kube-proxy" id=7e30b844-a9dc-485e-a895-8b46dbdd7fd3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.947201773Z" level=info msg="Starting container: 96f8dc4383cc8b499df13d61d76c65f07df3e2a0a27a63c934b98f7d5f3da1d7" id=8a2d1876-774f-45c4-820b-c14303d93259 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.954250975Z" level=info msg="Started container" PID=1068 containerID=96f8dc4383cc8b499df13d61d76c65f07df3e2a0a27a63c934b98f7d5f3da1d7 description=kube-system/kube-proxy-7ccmv/kube-proxy id=8a2d1876-774f-45c4-820b-c14303d93259 name=/runtime.v1.RuntimeService/StartContainer sandboxID=260acb60db6b434fa427d4f4aca97487609e08897d6e1496dcc09be385345a8a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	96f8dc4383cc8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 seconds ago       Running             kube-proxy                1                   260acb60db6b4       kube-proxy-7ccmv                            kube-system
	1ebbc83efd198       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 seconds ago       Running             kindnet-cni               1                   be5a70ad752e0       kindnet-8pwmm                               kube-system
	55951d0d04b4f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   10 seconds ago      Running             kube-scheduler            1                   0df82917046fd       kube-scheduler-newest-cni-499584            kube-system
	29bf860a34581       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago      Running             kube-apiserver            1                   c5612f313bc2d       kube-apiserver-newest-cni-499584            kube-system
	5ba284305a4ab       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago      Running             etcd                      1                   9440c01235d71       etcd-newest-cni-499584                      kube-system
	7ef630ea40b95       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago      Running             kube-controller-manager   1                   4c6101ea8af12       kube-controller-manager-newest-cni-499584   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-499584
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-499584
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=newest-cni-499584
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_14_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:14:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-499584
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:14:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:14:48 +0000   Sun, 23 Nov 2025 10:14:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:14:48 +0000   Sun, 23 Nov 2025 10:14:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:14:48 +0000   Sun, 23 Nov 2025 10:14:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 10:14:48 +0000   Sun, 23 Nov 2025 10:14:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-499584
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                d0df54c2-215f-48c8-868a-6c3e0d8ae69f
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-499584                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         29s
	  kube-system                 kindnet-8pwmm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-499584             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-newest-cni-499584    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-7ccmv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-499584             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 24s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  41s (x8 over 41s)  kubelet          Node newest-cni-499584 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 41s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 41s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node newest-cni-499584 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     41s (x8 over 41s)  kubelet          Node newest-cni-499584 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     29s                kubelet          Node newest-cni-499584 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 29s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  29s                kubelet          Node newest-cni-499584 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    29s                kubelet          Node newest-cni-499584 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 29s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           26s                node-controller  Node newest-cni-499584 event: Registered Node newest-cni-499584 in Controller
	  Normal   Starting                 11s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11s (x8 over 11s)  kubelet          Node newest-cni-499584 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet          Node newest-cni-499584 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11s (x8 over 11s)  kubelet          Node newest-cni-499584 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-499584 event: Registered Node newest-cni-499584 in Controller
	
	
	==> dmesg <==
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	[Nov23 10:10] overlayfs: idmapped layers are currently not supported
	[Nov23 10:11] overlayfs: idmapped layers are currently not supported
	[Nov23 10:12] overlayfs: idmapped layers are currently not supported
	[ +16.378417] overlayfs: idmapped layers are currently not supported
	[Nov23 10:13] overlayfs: idmapped layers are currently not supported
	[Nov23 10:14] overlayfs: idmapped layers are currently not supported
	[ +29.685025] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5ba284305a4abd603bf7200240b618f02ce262119d49e43b0da6cf7313bbc7be] <==
	{"level":"warn","ts":"2025-11-23T10:14:46.809988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.842157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.853929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.875538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.907741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.926083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.926907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.944581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.977385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.988393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.012642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.031347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.048845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.080120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.104081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.130236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.154133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.178129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.202545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.215504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.241583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.267598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.294556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.306529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.381183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58248","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:14:54 up  2:57,  0 user,  load average: 5.96, 4.97, 3.83
	Linux newest-cni-499584 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1ebbc83efd198176f3a140eea4f94119fc8ee22e821e82115ee102cb0de5c991] <==
	I1123 10:14:49.063720       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:14:49.064083       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:14:49.065013       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:14:49.065040       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:14:49.065056       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:14:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:14:49.268873       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:14:49.268891       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:14:49.268899       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:14:49.269012       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [29bf860a34581ef12a5c2e695cb5c4f9bee91e4dfc153fd656e57d8c48fa1f90] <==
	I1123 10:14:48.437989       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 10:14:48.440905       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 10:14:48.440959       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 10:14:48.441715       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:14:48.442180       1 aggregator.go:171] initial CRD sync complete...
	I1123 10:14:48.442192       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 10:14:48.442198       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:14:48.442204       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:14:48.447314       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 10:14:48.449769       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 10:14:48.449799       1 policy_source.go:240] refreshing policies
	E1123 10:14:48.454713       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 10:14:48.490109       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:14:48.575249       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:14:49.051600       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:14:49.324106       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:14:49.374038       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:14:49.416166       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:14:49.440166       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:14:49.594651       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.2.132"}
	I1123 10:14:49.632069       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.200.71"}
	I1123 10:14:52.107293       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:14:52.156534       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:14:52.204777       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:14:52.306280       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [7ef630ea40b951127d767ee2e09ebb4700a9b36e54474665707cf2be5860d032] <==
	I1123 10:14:51.718806       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 10:14:51.724596       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:14:51.726911       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:14:51.729222       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 10:14:51.739542       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 10:14:51.739640       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 10:14:51.743692       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 10:14:51.743751       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 10:14:51.743778       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 10:14:51.743782       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 10:14:51.743789       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 10:14:51.746669       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 10:14:51.749094       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:14:51.749227       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 10:14:51.749302       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 10:14:51.751957       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:14:51.752103       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-499584"
	I1123 10:14:51.752171       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 10:14:51.749800       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:14:51.749318       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 10:14:51.749648       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:14:51.753730       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:14:51.753745       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:14:51.749659       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:14:51.768573       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [96f8dc4383cc8b499df13d61d76c65f07df3e2a0a27a63c934b98f7d5f3da1d7] <==
	I1123 10:14:49.034820       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:14:49.476698       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:14:49.578150       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:14:49.578227       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 10:14:49.578339       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:14:49.711356       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:14:49.711407       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:14:49.716733       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:14:49.718098       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:14:49.718121       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:14:49.719331       1 config.go:200] "Starting service config controller"
	I1123 10:14:49.719354       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:14:49.719383       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:14:49.719396       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:14:49.719406       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:14:49.719410       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:14:49.720042       1 config.go:309] "Starting node config controller"
	I1123 10:14:49.720060       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:14:49.720076       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:14:49.822293       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:14:49.822369       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 10:14:49.822623       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [55951d0d04b4f47b2d5b5cf62ccc211475fb562db7126dbfd7b727861257eac0] <==
	I1123 10:14:46.862972       1 serving.go:386] Generated self-signed cert in-memory
	W1123 10:14:48.271556       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 10:14:48.271590       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:14:48.271600       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 10:14:48.271607       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 10:14:48.454543       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:14:48.454570       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:14:48.463471       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:14:48.463579       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:14:48.463598       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:14:48.463614       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:14:48.564961       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:14:45 newest-cni-499584 kubelet[733]: E1123 10:14:45.567768     733 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-499584\" not found" node="newest-cni-499584"
	Nov 23 10:14:46 newest-cni-499584 kubelet[733]: E1123 10:14:46.017903     733 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-499584\" not found" node="newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.263977     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.450798     733 apiserver.go:52] "Watching apiserver"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.476518     733 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.554366     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8dace15f-cf56-4d36-9840-ceb07d85b8b0-lib-modules\") pod \"kube-proxy-7ccmv\" (UID: \"8dace15f-cf56-4d36-9840-ceb07d85b8b0\") " pod="kube-system/kube-proxy-7ccmv"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.554423     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3933503c-90da-4b79-98e7-e4a22d58813d-cni-cfg\") pod \"kindnet-8pwmm\" (UID: \"3933503c-90da-4b79-98e7-e4a22d58813d\") " pod="kube-system/kindnet-8pwmm"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.554443     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3933503c-90da-4b79-98e7-e4a22d58813d-xtables-lock\") pod \"kindnet-8pwmm\" (UID: \"3933503c-90da-4b79-98e7-e4a22d58813d\") " pod="kube-system/kindnet-8pwmm"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.554487     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8dace15f-cf56-4d36-9840-ceb07d85b8b0-xtables-lock\") pod \"kube-proxy-7ccmv\" (UID: \"8dace15f-cf56-4d36-9840-ceb07d85b8b0\") " pod="kube-system/kube-proxy-7ccmv"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.554512     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3933503c-90da-4b79-98e7-e4a22d58813d-lib-modules\") pod \"kindnet-8pwmm\" (UID: \"3933503c-90da-4b79-98e7-e4a22d58813d\") " pod="kube-system/kindnet-8pwmm"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: E1123 10:14:48.567090     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-499584\" already exists" pod="kube-system/etcd-newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.567131     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.576368     733 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.576469     733 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.576497     733 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.577671     733 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.598786     733 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: E1123 10:14:48.613646     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-499584\" already exists" pod="kube-system/kube-apiserver-newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.613680     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: E1123 10:14:48.641062     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-499584\" already exists" pod="kube-system/kube-controller-manager-newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.641105     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: E1123 10:14:48.666851     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-499584\" already exists" pod="kube-system/kube-scheduler-newest-cni-499584"
	Nov 23 10:14:51 newest-cni-499584 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:14:51 newest-cni-499584 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:14:51 newest-cni-499584 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-499584 -n newest-cni-499584
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-499584 -n newest-cni-499584: exit status 2 (368.892545ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-499584 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gpv4n storage-provisioner dashboard-metrics-scraper-6ffb444bf9-h8jzm kubernetes-dashboard-855c9754f9-dvcbz
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-499584 describe pod coredns-66bc5c9577-gpv4n storage-provisioner dashboard-metrics-scraper-6ffb444bf9-h8jzm kubernetes-dashboard-855c9754f9-dvcbz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-499584 describe pod coredns-66bc5c9577-gpv4n storage-provisioner dashboard-metrics-scraper-6ffb444bf9-h8jzm kubernetes-dashboard-855c9754f9-dvcbz: exit status 1 (92.741479ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gpv4n" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-h8jzm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-dvcbz" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-499584 describe pod coredns-66bc5c9577-gpv4n storage-provisioner dashboard-metrics-scraper-6ffb444bf9-h8jzm kubernetes-dashboard-855c9754f9-dvcbz: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-499584
helpers_test.go:243: (dbg) docker inspect newest-cni-499584:

-- stdout --
	[
	    {
	        "Id": "e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5",
	        "Created": "2025-11-23T10:13:53.150463538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 534451,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:14:35.588568101Z",
	            "FinishedAt": "2025-11-23T10:14:34.753824894Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5/hosts",
	        "LogPath": "/var/lib/docker/containers/e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5/e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5-json.log",
	        "Name": "/newest-cni-499584",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-499584:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-499584",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e79d7d886da113c5b3dcdc53b315d8bfa48bf47c7593df6e9ff09a0d9d6c07f5",
	                "LowerDir": "/var/lib/docker/overlay2/0ddfc19f10ceb1d022289b3c3394eb4fa72b02f60299c24da29cbd9e3855f5fb-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0ddfc19f10ceb1d022289b3c3394eb4fa72b02f60299c24da29cbd9e3855f5fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0ddfc19f10ceb1d022289b3c3394eb4fa72b02f60299c24da29cbd9e3855f5fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0ddfc19f10ceb1d022289b3c3394eb4fa72b02f60299c24da29cbd9e3855f5fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-499584",
	                "Source": "/var/lib/docker/volumes/newest-cni-499584/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-499584",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-499584",
	                "name.minikube.sigs.k8s.io": "newest-cni-499584",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ccc72c12499e5184634ab555a2e446e413f75726b35eb2547a61f80ff41776e8",
	            "SandboxKey": "/var/run/docker/netns/ccc72c12499e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33510"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-499584": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:d8:ed:05:9e:a0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e27c561c33f1c11e6ad07d3f525986a08d52d1b7909a984158deea3644563840",
	                    "EndpointID": "4c67ed7a9563bfdddbc91b999cc084d7cb653f2ff3fe8982ede9e63d9d2e7bc5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-499584",
	                        "e79d7d886da1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
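For reference, the "NetworkSettings.Ports" map in the inspect output above holds the ephemeral host ports Docker published for this container (33506-33510, all bound to 127.0.0.1); the "Last Start" log further down resolves the SSH mapping with the same Go template before dialing 127.0.0.1:33506. A minimal way to repeat that lookup by hand, using the profile name from this report (any running minikube KIC container works the same way):

	# Print the host port Docker mapped to the container's SSH port (22/tcp),
	# mirroring the `docker container inspect -f ...` call that appears in the logs below.
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  newest-cni-499584
	# For the run captured above this prints 33506; minikube then connects as
	# docker@127.0.0.1:33506 with the key at .minikube/machines/newest-cni-499584/id_rsa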
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-499584 -n newest-cni-499584
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-499584 -n newest-cni-499584: exit status 2 (384.386424ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
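The status check above asks only for the Host field via a Go template, so "Running" refers to the container itself; the non-zero exit presumably reflects other components not running after the pause attempt, which the harness notes may be acceptable. For a fuller picture during triage, the same binary can report every component at once (a hedged example; the --output flag is assumed to be available in this v1.37.0 build rather than taken from this report):

	# Show host, kubelet, apiserver and kubeconfig state together instead of just {{.Host}}.
	out/minikube-linux-arm64 status -p newest-cni-499584 --output=json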
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-499584 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-499584 logs -n 25: (1.158192964s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ no-preload-020224 image list --format=json                                                                                                                                                                                                    │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │ 23 Nov 25 10:11 UTC │
	│ pause   │ -p no-preload-020224 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │                     │
	│ delete  │ -p no-preload-020224                                                                                                                                                                                                                          │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:11 UTC │ 23 Nov 25 10:12 UTC │
	│ delete  │ -p no-preload-020224                                                                                                                                                                                                                          │ no-preload-020224            │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ delete  │ -p disable-driver-mounts-097888                                                                                                                                                                                                               │ disable-driver-mounts-097888 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-566990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │                     │
	│ stop    │ -p embed-certs-566990 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ addons  │ enable dashboard -p embed-certs-566990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-330197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-330197 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ image   │ embed-certs-566990 image list --format=json                                                                                                                                                                                                   │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ pause   │ -p embed-certs-566990 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	│ delete  │ -p embed-certs-566990                                                                                                                                                                                                                         │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ delete  │ -p embed-certs-566990                                                                                                                                                                                                                         │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ start   │ -p newest-cni-499584 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:14 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-330197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ start   │ -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-499584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │                     │
	│ stop    │ -p newest-cni-499584 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ addons  │ enable dashboard -p newest-cni-499584 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ start   │ -p newest-cni-499584 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ image   │ newest-cni-499584 image list --format=json                                                                                                                                                                                                    │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ pause   │ -p newest-cni-499584 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:14:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:14:35.308429  534325 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:14:35.308606  534325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:14:35.308636  534325 out.go:374] Setting ErrFile to fd 2...
	I1123 10:14:35.308660  534325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:14:35.308962  534325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:14:35.309373  534325 out.go:368] Setting JSON to false
	I1123 10:14:35.310426  534325 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10624,"bootTime":1763882251,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:14:35.310533  534325 start.go:143] virtualization:  
	I1123 10:14:35.314130  534325 out.go:179] * [newest-cni-499584] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:14:35.318169  534325 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 10:14:35.318306  534325 notify.go:221] Checking for updates...
	I1123 10:14:35.324604  534325 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:14:35.327708  534325 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:14:35.330662  534325 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 10:14:35.333670  534325 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:14:35.336633  534325 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:14:35.340028  534325 config.go:182] Loaded profile config "newest-cni-499584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:14:35.340614  534325 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:14:35.371141  534325 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:14:35.371271  534325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:14:35.436934  534325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:14:35.426581673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:14:35.437078  534325 docker.go:319] overlay module found
	I1123 10:14:35.440404  534325 out.go:179] * Using the docker driver based on existing profile
	I1123 10:14:35.443223  534325 start.go:309] selected driver: docker
	I1123 10:14:35.443245  534325 start.go:927] validating driver "docker" against &{Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:14:35.443371  534325 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:14:35.444083  534325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:14:35.497494  534325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:14:35.4876245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:14:35.497858  534325 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:14:35.497886  534325 cni.go:84] Creating CNI manager for ""
	I1123 10:14:35.497946  534325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:14:35.497992  534325 start.go:353] cluster config:
	{Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:14:35.503042  534325 out.go:179] * Starting "newest-cni-499584" primary control-plane node in "newest-cni-499584" cluster
	I1123 10:14:35.505793  534325 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:14:35.508757  534325 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:14:35.511894  534325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:14:35.511952  534325 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 10:14:35.511963  534325 cache.go:65] Caching tarball of preloaded images
	I1123 10:14:35.512065  534325 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 10:14:35.512076  534325 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:14:35.512186  534325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/config.json ...
	I1123 10:14:35.512284  534325 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:14:35.537457  534325 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:14:35.537482  534325 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:14:35.537505  534325 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:14:35.537537  534325 start.go:360] acquireMachinesLock for newest-cni-499584: {Name:mk060761daeb1a62836bf24a9b9e867393b1f580 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:14:35.537611  534325 start.go:364] duration metric: took 51.693µs to acquireMachinesLock for "newest-cni-499584"
	I1123 10:14:35.537632  534325 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:14:35.537637  534325 fix.go:54] fixHost starting: 
	I1123 10:14:35.537894  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:35.553302  534325 fix.go:112] recreateIfNeeded on newest-cni-499584: state=Stopped err=<nil>
	W1123 10:14:35.553334  534325 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 10:14:35.397704  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	W1123 10:14:37.895831  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	I1123 10:14:35.556606  534325 out.go:252] * Restarting existing docker container for "newest-cni-499584" ...
	I1123 10:14:35.556691  534325 cli_runner.go:164] Run: docker start newest-cni-499584
	I1123 10:14:35.810485  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:35.841678  534325 kic.go:430] container "newest-cni-499584" state is running.
	I1123 10:14:35.842092  534325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-499584
	I1123 10:14:35.865371  534325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/config.json ...
	I1123 10:14:35.865636  534325 machine.go:94] provisionDockerMachine start ...
	I1123 10:14:35.865701  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:35.888754  534325 main.go:143] libmachine: Using SSH client type: native
	I1123 10:14:35.889100  534325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I1123 10:14:35.889118  534325 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:14:35.889757  534325 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 10:14:39.045105  534325 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-499584
	
	I1123 10:14:39.045131  534325 ubuntu.go:182] provisioning hostname "newest-cni-499584"
	I1123 10:14:39.045248  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:39.063953  534325 main.go:143] libmachine: Using SSH client type: native
	I1123 10:14:39.064263  534325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I1123 10:14:39.064279  534325 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-499584 && echo "newest-cni-499584" | sudo tee /etc/hostname
	I1123 10:14:39.227033  534325 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-499584
	
	I1123 10:14:39.227156  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:39.245578  534325 main.go:143] libmachine: Using SSH client type: native
	I1123 10:14:39.245894  534325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I1123 10:14:39.245917  534325 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-499584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-499584/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-499584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:14:39.399001  534325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:14:39.399024  534325 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 10:14:39.399055  534325 ubuntu.go:190] setting up certificates
	I1123 10:14:39.399073  534325 provision.go:84] configureAuth start
	I1123 10:14:39.399131  534325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-499584
	I1123 10:14:39.416579  534325 provision.go:143] copyHostCerts
	I1123 10:14:39.416662  534325 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 10:14:39.416680  534325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 10:14:39.416761  534325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 10:14:39.416866  534325 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 10:14:39.416870  534325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 10:14:39.416899  534325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 10:14:39.416961  534325 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 10:14:39.416967  534325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 10:14:39.416990  534325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 10:14:39.417043  534325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.newest-cni-499584 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-499584]
	I1123 10:14:39.627598  534325 provision.go:177] copyRemoteCerts
	I1123 10:14:39.627689  534325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:14:39.627763  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:39.647970  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:39.757042  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:14:39.774686  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:14:39.794086  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:14:39.812248  534325 provision.go:87] duration metric: took 413.152366ms to configureAuth
	I1123 10:14:39.812274  534325 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:14:39.812473  534325 config.go:182] Loaded profile config "newest-cni-499584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:14:39.812587  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:39.831627  534325 main.go:143] libmachine: Using SSH client type: native
	I1123 10:14:39.831936  534325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I1123 10:14:39.831959  534325 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:14:40.202065  534325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:14:40.202163  534325 machine.go:97] duration metric: took 4.336516564s to provisionDockerMachine
	I1123 10:14:40.202198  534325 start.go:293] postStartSetup for "newest-cni-499584" (driver="docker")
	I1123 10:14:40.202228  534325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:14:40.202328  534325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:14:40.202388  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:40.221920  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:40.329811  534325 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:14:40.333732  534325 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:14:40.333763  534325 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:14:40.333774  534325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 10:14:40.333829  534325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 10:14:40.333908  534325 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 10:14:40.334018  534325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:14:40.341710  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:14:40.360854  534325 start.go:296] duration metric: took 158.623442ms for postStartSetup
	I1123 10:14:40.360956  534325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:14:40.361017  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:40.378625  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:40.482834  534325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:14:40.488027  534325 fix.go:56] duration metric: took 4.950382019s for fixHost
	I1123 10:14:40.488055  534325 start.go:83] releasing machines lock for "newest-cni-499584", held for 4.950434147s
	I1123 10:14:40.488126  534325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-499584
	I1123 10:14:40.507445  534325 ssh_runner.go:195] Run: cat /version.json
	I1123 10:14:40.507515  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:40.507781  534325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:14:40.507851  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:40.526536  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:40.540744  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:40.633331  534325 ssh_runner.go:195] Run: systemctl --version
	I1123 10:14:40.735956  534325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:14:40.771974  534325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:14:40.776327  534325 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:14:40.776407  534325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:14:40.784473  534325 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:14:40.784497  534325 start.go:496] detecting cgroup driver to use...
	I1123 10:14:40.784529  534325 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:14:40.784595  534325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:14:40.802198  534325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:14:40.815436  534325 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:14:40.815516  534325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:14:40.833773  534325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:14:40.847175  534325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:14:40.964031  534325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:14:41.079493  534325 docker.go:234] disabling docker service ...
	I1123 10:14:41.079592  534325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:14:41.099078  534325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:14:41.126997  534325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:14:41.246640  534325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:14:41.357516  534325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:14:41.371610  534325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:14:41.386018  534325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:14:41.386151  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.400270  534325 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:14:41.400380  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.410282  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.419540  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.429966  534325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:14:41.442207  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.452481  534325 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.462238  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.472504  534325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:14:41.480544  534325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:14:41.488228  534325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:14:41.629522  534325 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:14:41.808644  534325 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:14:41.808710  534325 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:14:41.812474  534325 start.go:564] Will wait 60s for crictl version
	I1123 10:14:41.812551  534325 ssh_runner.go:195] Run: which crictl
	I1123 10:14:41.816298  534325 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:14:41.846825  534325 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:14:41.846917  534325 ssh_runner.go:195] Run: crio --version
	I1123 10:14:41.875254  534325 ssh_runner.go:195] Run: crio --version
	I1123 10:14:41.910420  534325 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:14:41.913261  534325 cli_runner.go:164] Run: docker network inspect newest-cni-499584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:14:41.928711  534325 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:14:41.932661  534325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:14:41.945351  534325 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1123 10:14:39.896516  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	W1123 10:14:41.896853  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	I1123 10:14:41.948184  534325 kubeadm.go:884] updating cluster {Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:14:41.948333  534325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:14:41.948404  534325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:14:41.983037  534325 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:14:41.983059  534325 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:14:41.983122  534325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:14:42.012647  534325 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:14:42.012670  534325 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:14:42.012684  534325 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 10:14:42.012801  534325 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-499584 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:14:42.012913  534325 ssh_runner.go:195] Run: crio config
	I1123 10:14:42.085762  534325 cni.go:84] Creating CNI manager for ""
	I1123 10:14:42.085845  534325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:14:42.085887  534325 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 10:14:42.085940  534325 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-499584 NodeName:newest-cni-499584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:14:42.086144  534325 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-499584"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:14:42.086272  534325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:14:42.099771  534325 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:14:42.099872  534325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:14:42.109476  534325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:14:42.125982  534325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:14:42.143068  534325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1123 10:14:42.162444  534325 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:14:42.167161  534325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:14:42.179960  534325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:14:42.317087  534325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:14:42.336114  534325 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584 for IP: 192.168.76.2
	I1123 10:14:42.336139  534325 certs.go:195] generating shared ca certs ...
	I1123 10:14:42.336157  534325 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:42.336301  534325 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:14:42.336359  534325 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:14:42.336372  534325 certs.go:257] generating profile certs ...
	I1123 10:14:42.336466  534325 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/client.key
	I1123 10:14:42.336546  534325 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.key.22d7de13
	I1123 10:14:42.336598  534325 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.key
	I1123 10:14:42.336725  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:14:42.336762  534325 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:14:42.336780  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:14:42.336809  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:14:42.336841  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:14:42.336874  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:14:42.336925  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:14:42.337678  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:14:42.363407  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:14:42.382696  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:14:42.404059  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:14:42.425457  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:14:42.449626  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:14:42.473741  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:14:42.503125  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:14:42.539613  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:14:42.564062  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:14:42.584177  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:14:42.606938  534325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:14:42.620683  534325 ssh_runner.go:195] Run: openssl version
	I1123 10:14:42.627518  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:14:42.636658  534325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:14:42.640680  534325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:14:42.640792  534325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:14:42.685708  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 10:14:42.696740  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:14:42.705991  534325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:14:42.709884  534325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:14:42.709962  534325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:14:42.750861  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:14:42.759122  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:14:42.767889  534325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:14:42.772006  534325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:14:42.772075  534325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:14:42.814113  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:14:42.823220  534325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:14:42.827234  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:14:42.869026  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:14:42.913588  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:14:42.969491  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:14:43.021834  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:14:43.089424  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 10:14:43.174105  534325 kubeadm.go:401] StartCluster: {Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:14:43.174258  534325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:14:43.174371  534325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:14:43.253263  534325 cri.go:89] found id: "55951d0d04b4f47b2d5b5cf62ccc211475fb562db7126dbfd7b727861257eac0"
	I1123 10:14:43.253335  534325 cri.go:89] found id: "29bf860a34581ef12a5c2e695cb5c4f9bee91e4dfc153fd656e57d8c48fa1f90"
	I1123 10:14:43.253355  534325 cri.go:89] found id: "5ba284305a4abd603bf7200240b618f02ce262119d49e43b0da6cf7313bbc7be"
	I1123 10:14:43.253381  534325 cri.go:89] found id: "7ef630ea40b951127d767ee2e09ebb4700a9b36e54474665707cf2be5860d032"
	I1123 10:14:43.253424  534325 cri.go:89] found id: ""
	I1123 10:14:43.253525  534325 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:14:43.275277  534325 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:14:43Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:14:43.275413  534325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:14:43.288015  534325 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:14:43.288090  534325 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:14:43.288184  534325 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:14:43.300089  534325 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:14:43.300769  534325 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-499584" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:14:43.301095  534325 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-282998/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-499584" cluster setting kubeconfig missing "newest-cni-499584" context setting]
	I1123 10:14:43.301637  534325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:43.303525  534325 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:14:43.315357  534325 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:14:43.315438  534325 kubeadm.go:602] duration metric: took 27.319463ms to restartPrimaryControlPlane
	I1123 10:14:43.315462  534325 kubeadm.go:403] duration metric: took 141.368104ms to StartCluster
	I1123 10:14:43.315508  534325 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:43.315602  534325 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:14:43.316666  534325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:43.316959  534325 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:14:43.317718  534325 config.go:182] Loaded profile config "newest-cni-499584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:14:43.317697  534325 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:14:43.317796  534325 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-499584"
	I1123 10:14:43.317809  534325 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-499584"
	W1123 10:14:43.317818  534325 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:14:43.317823  534325 addons.go:70] Setting dashboard=true in profile "newest-cni-499584"
	I1123 10:14:43.317850  534325 addons.go:70] Setting default-storageclass=true in profile "newest-cni-499584"
	I1123 10:14:43.317863  534325 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-499584"
	I1123 10:14:43.317855  534325 addons.go:239] Setting addon dashboard=true in "newest-cni-499584"
	W1123 10:14:43.317900  534325 addons.go:248] addon dashboard should already be in state true
	I1123 10:14:43.317929  534325 host.go:66] Checking if "newest-cni-499584" exists ...
	I1123 10:14:43.318191  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:43.317844  534325 host.go:66] Checking if "newest-cni-499584" exists ...
	I1123 10:14:43.319197  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:43.319350  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:43.322853  534325 out.go:179] * Verifying Kubernetes components...
	I1123 10:14:43.326101  534325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:14:43.365483  534325 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:14:43.375618  534325 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:14:43.380627  534325 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:14:43.413785  529379 pod_ready.go:94] pod "coredns-66bc5c9577-pphv6" is "Ready"
	I1123 10:14:43.413811  529379 pod_ready.go:86] duration metric: took 36.023476036s for pod "coredns-66bc5c9577-pphv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.423051  529379 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.472047  529379 pod_ready.go:94] pod "etcd-default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:43.472071  529379 pod_ready.go:86] duration metric: took 48.99566ms for pod "etcd-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.480493  529379 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.497900  529379 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:43.497926  529379 pod_ready.go:86] duration metric: took 17.40953ms for pod "kube-apiserver-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.501812  529379 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.594337  529379 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:43.594360  529379 pod_ready.go:86] duration metric: took 92.526931ms for pod "kube-controller-manager-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.793574  529379 pod_ready.go:83] waiting for pod "kube-proxy-75qqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:44.193846  529379 pod_ready.go:94] pod "kube-proxy-75qqt" is "Ready"
	I1123 10:14:44.193870  529379 pod_ready.go:86] duration metric: took 400.271598ms for pod "kube-proxy-75qqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:44.394519  529379 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:44.794600  529379 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:44.794624  529379 pod_ready.go:86] duration metric: took 400.080817ms for pod "kube-scheduler-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:44.794638  529379 pod_ready.go:40] duration metric: took 37.408453068s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:14:44.888790  529379 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:14:44.892050  529379 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-330197" cluster and "default" namespace by default
	I1123 10:14:43.380644  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:14:43.380714  534325 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:14:43.380780  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:43.384089  534325 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:14:43.384112  534325 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:14:43.384176  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:43.393141  534325 addons.go:239] Setting addon default-storageclass=true in "newest-cni-499584"
	W1123 10:14:43.393166  534325 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:14:43.393191  534325 host.go:66] Checking if "newest-cni-499584" exists ...
	I1123 10:14:43.393629  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:43.426018  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:43.454818  534325 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:14:43.454840  534325 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:14:43.454905  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:43.477805  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:43.499014  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:43.684857  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:14:43.684931  534325 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:14:43.710152  534325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:14:43.742808  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:14:43.742881  534325 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:14:43.748309  534325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:14:43.767832  534325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:14:43.777593  534325 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:14:43.777753  534325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:14:43.819844  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:14:43.819915  534325 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:14:43.906458  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:14:43.906530  534325 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:14:43.961147  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:14:43.961219  534325 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:14:44.045109  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:14:44.045187  534325 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:14:44.085266  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:14:44.085342  534325 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:14:44.122397  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:14:44.122478  534325 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:14:44.149167  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:14:44.149243  534325 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:14:44.176014  534325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:14:49.874831  534325 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.126442363s)
	I1123 10:14:49.874895  534325 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.106997388s)
	I1123 10:14:49.875220  534325 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.097427038s)
	I1123 10:14:49.875249  534325 api_server.go:72] duration metric: took 6.558232564s to wait for apiserver process to appear ...
	I1123 10:14:49.875256  534325 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:14:49.875268  534325 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:14:49.875552  534325 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.699461585s)
	I1123 10:14:49.878710  534325 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-499584 addons enable metrics-server
	
	I1123 10:14:49.899521  534325 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:14:49.901259  534325 api_server.go:141] control plane version: v1.34.1
	I1123 10:14:49.901287  534325 api_server.go:131] duration metric: took 26.024714ms to wait for apiserver health ...
	I1123 10:14:49.901296  534325 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:14:49.907916  534325 system_pods.go:59] 8 kube-system pods found
	I1123 10:14:49.907956  534325 system_pods.go:61] "coredns-66bc5c9577-gpv4n" [3ac78ff6-250d-4ce6-ba6f-913ba5a46be8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:14:49.907965  534325 system_pods.go:61] "etcd-newest-cni-499584" [fbc5fde9-9d75-41ee-a27e-bea9e43c5c1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:14:49.907971  534325 system_pods.go:61] "kindnet-8pwmm" [3933503c-90da-4b79-98e7-e4a22d58813d] Running
	I1123 10:14:49.907978  534325 system_pods.go:61] "kube-apiserver-newest-cni-499584" [2a4c121c-305b-4eef-8b3a-127a1fef8812] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:14:49.907985  534325 system_pods.go:61] "kube-controller-manager-newest-cni-499584" [c00e062c-870f-4ed7-a05d-615fc6c7d81d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:14:49.907989  534325 system_pods.go:61] "kube-proxy-7ccmv" [8dace15f-cf56-4d36-9840-ceb07d85b8b0] Running
	I1123 10:14:49.907995  534325 system_pods.go:61] "kube-scheduler-newest-cni-499584" [94684fe3-8d3e-4f48-9dad-6f0c6414f3c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:14:49.908028  534325 system_pods.go:61] "storage-provisioner" [70f72df9-2a87-468c-9f4c-2df81d587a29] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:14:49.908042  534325 system_pods.go:74] duration metric: took 6.740578ms to wait for pod list to return data ...
	I1123 10:14:49.908051  534325 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:14:49.908970  534325 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 10:14:49.911793  534325 addons.go:530] duration metric: took 6.594095134s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 10:14:49.916336  534325 default_sa.go:45] found service account: "default"
	I1123 10:14:49.916411  534325 default_sa.go:55] duration metric: took 8.348899ms for default service account to be created ...
	I1123 10:14:49.916442  534325 kubeadm.go:587] duration metric: took 6.59942349s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:14:49.916485  534325 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:14:49.919381  534325 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:14:49.919471  534325 node_conditions.go:123] node cpu capacity is 2
	I1123 10:14:49.919500  534325 node_conditions.go:105] duration metric: took 2.99226ms to run NodePressure ...
	I1123 10:14:49.919527  534325 start.go:242] waiting for startup goroutines ...
	I1123 10:14:49.919552  534325 start.go:247] waiting for cluster config update ...
	I1123 10:14:49.919581  534325 start.go:256] writing updated cluster config ...
	I1123 10:14:49.919880  534325 ssh_runner.go:195] Run: rm -f paused
	I1123 10:14:50.006509  534325 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:14:50.010080  534325 out.go:179] * Done! kubectl is now configured to use "newest-cni-499584" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.773736568Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.786087556Z" level=info msg="Running pod sandbox: kube-system/kindnet-8pwmm/POD" id=1c211209-be46-49bc-9be1-393785774f5e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.786341107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.806637016Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1c211209-be46-49bc-9be1-393785774f5e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.807270035Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=40ed3a46-69df-43fd-a5b4-265318e69268 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.814153951Z" level=info msg="Ran pod sandbox be5a70ad752e05e4d82641a3ec0db4ab2476468c76ceb780bbbbd95d4637cd51 with infra container: kube-system/kindnet-8pwmm/POD" id=1c211209-be46-49bc-9be1-393785774f5e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.841225302Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=44279a9a-7f0d-46c4-a4a1-d9dbd89bba4e name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.844608469Z" level=info msg="Ran pod sandbox 260acb60db6b434fa427d4f4aca97487609e08897d6e1496dcc09be385345a8a with infra container: kube-system/kube-proxy-7ccmv/POD" id=40ed3a46-69df-43fd-a5b4-265318e69268 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.849080712Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=62755efc-8c24-4bf7-a315-1df1571fbbf1 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.850598094Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=92f63461-047a-49d4-9214-ba1aaaf703f2 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.854593135Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=773b8387-3646-4125-bb36-ff5f88d2e179 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.860598145Z" level=info msg="Creating container: kube-system/kindnet-8pwmm/kindnet-cni" id=25059575-85a2-4b96-9bcf-94105c5c8230 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.861540921Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.872494578Z" level=info msg="Creating container: kube-system/kube-proxy-7ccmv/kube-proxy" id=7e30b844-a9dc-485e-a895-8b46dbdd7fd3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.872683291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.887473965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.887864904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.8886772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.888836202Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.927785057Z" level=info msg="Created container 1ebbc83efd198176f3a140eea4f94119fc8ee22e821e82115ee102cb0de5c991: kube-system/kindnet-8pwmm/kindnet-cni" id=25059575-85a2-4b96-9bcf-94105c5c8230 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.929099589Z" level=info msg="Starting container: 1ebbc83efd198176f3a140eea4f94119fc8ee22e821e82115ee102cb0de5c991" id=f9ed459f-7e84-4b68-b929-14a09ce85a36 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.939090811Z" level=info msg="Started container" PID=1066 containerID=1ebbc83efd198176f3a140eea4f94119fc8ee22e821e82115ee102cb0de5c991 description=kube-system/kindnet-8pwmm/kindnet-cni id=f9ed459f-7e84-4b68-b929-14a09ce85a36 name=/runtime.v1.RuntimeService/StartContainer sandboxID=be5a70ad752e05e4d82641a3ec0db4ab2476468c76ceb780bbbbd95d4637cd51
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.945879865Z" level=info msg="Created container 96f8dc4383cc8b499df13d61d76c65f07df3e2a0a27a63c934b98f7d5f3da1d7: kube-system/kube-proxy-7ccmv/kube-proxy" id=7e30b844-a9dc-485e-a895-8b46dbdd7fd3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.947201773Z" level=info msg="Starting container: 96f8dc4383cc8b499df13d61d76c65f07df3e2a0a27a63c934b98f7d5f3da1d7" id=8a2d1876-774f-45c4-820b-c14303d93259 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:14:48 newest-cni-499584 crio[613]: time="2025-11-23T10:14:48.954250975Z" level=info msg="Started container" PID=1068 containerID=96f8dc4383cc8b499df13d61d76c65f07df3e2a0a27a63c934b98f7d5f3da1d7 description=kube-system/kube-proxy-7ccmv/kube-proxy id=8a2d1876-774f-45c4-820b-c14303d93259 name=/runtime.v1.RuntimeService/StartContainer sandboxID=260acb60db6b434fa427d4f4aca97487609e08897d6e1496dcc09be385345a8a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	96f8dc4383cc8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   260acb60db6b4       kube-proxy-7ccmv                            kube-system
	1ebbc83efd198       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   be5a70ad752e0       kindnet-8pwmm                               kube-system
	55951d0d04b4f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   0df82917046fd       kube-scheduler-newest-cni-499584            kube-system
	29bf860a34581       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   12 seconds ago      Running             kube-apiserver            1                   c5612f313bc2d       kube-apiserver-newest-cni-499584            kube-system
	5ba284305a4ab       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago      Running             etcd                      1                   9440c01235d71       etcd-newest-cni-499584                      kube-system
	7ef630ea40b95       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago      Running             kube-controller-manager   1                   4c6101ea8af12       kube-controller-manager-newest-cni-499584   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-499584
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-499584
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=newest-cni-499584
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_14_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:14:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-499584
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:14:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:14:48 +0000   Sun, 23 Nov 2025 10:14:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:14:48 +0000   Sun, 23 Nov 2025 10:14:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:14:48 +0000   Sun, 23 Nov 2025 10:14:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 10:14:48 +0000   Sun, 23 Nov 2025 10:14:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-499584
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                d0df54c2-215f-48c8-868a-6c3e0d8ae69f
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-499584                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-8pwmm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-499584             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-499584    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-7ccmv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-499584             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node newest-cni-499584 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 44s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 44s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node newest-cni-499584 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node newest-cni-499584 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     32s                kubelet          Node newest-cni-499584 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node newest-cni-499584 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-499584 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-499584 event: Registered Node newest-cni-499584 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node newest-cni-499584 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-499584 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x8 over 14s)  kubelet          Node newest-cni-499584 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-499584 event: Registered Node newest-cni-499584 in Controller
	
	
	==> dmesg <==
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	[Nov23 10:10] overlayfs: idmapped layers are currently not supported
	[Nov23 10:11] overlayfs: idmapped layers are currently not supported
	[Nov23 10:12] overlayfs: idmapped layers are currently not supported
	[ +16.378417] overlayfs: idmapped layers are currently not supported
	[Nov23 10:13] overlayfs: idmapped layers are currently not supported
	[Nov23 10:14] overlayfs: idmapped layers are currently not supported
	[ +29.685025] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5ba284305a4abd603bf7200240b618f02ce262119d49e43b0da6cf7313bbc7be] <==
	{"level":"warn","ts":"2025-11-23T10:14:46.809988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.842157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.853929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.875538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.907741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.926083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.926907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.944581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.977385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:46.988393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.012642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.031347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.048845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.080120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.104081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.130236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.154133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.178129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.202545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.215504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.241583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.267598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.294556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.306529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:47.381183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58248","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:14:56 up  2:57,  0 user,  load average: 5.48, 4.88, 3.81
	Linux newest-cni-499584 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1ebbc83efd198176f3a140eea4f94119fc8ee22e821e82115ee102cb0de5c991] <==
	I1123 10:14:49.063720       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:14:49.064083       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:14:49.065013       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:14:49.065040       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:14:49.065056       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:14:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:14:49.268873       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:14:49.268891       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:14:49.268899       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:14:49.269012       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [29bf860a34581ef12a5c2e695cb5c4f9bee91e4dfc153fd656e57d8c48fa1f90] <==
	I1123 10:14:48.437989       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 10:14:48.440905       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 10:14:48.440959       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 10:14:48.441715       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:14:48.442180       1 aggregator.go:171] initial CRD sync complete...
	I1123 10:14:48.442192       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 10:14:48.442198       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:14:48.442204       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:14:48.447314       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 10:14:48.449769       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 10:14:48.449799       1 policy_source.go:240] refreshing policies
	E1123 10:14:48.454713       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 10:14:48.490109       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:14:48.575249       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:14:49.051600       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:14:49.324106       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:14:49.374038       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:14:49.416166       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:14:49.440166       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:14:49.594651       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.2.132"}
	I1123 10:14:49.632069       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.200.71"}
	I1123 10:14:52.107293       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:14:52.156534       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:14:52.204777       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:14:52.306280       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [7ef630ea40b951127d767ee2e09ebb4700a9b36e54474665707cf2be5860d032] <==
	I1123 10:14:51.718806       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 10:14:51.724596       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:14:51.726911       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:14:51.729222       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 10:14:51.739542       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 10:14:51.739640       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 10:14:51.743692       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 10:14:51.743751       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 10:14:51.743778       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 10:14:51.743782       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 10:14:51.743789       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 10:14:51.746669       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 10:14:51.749094       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:14:51.749227       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 10:14:51.749302       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 10:14:51.751957       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:14:51.752103       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-499584"
	I1123 10:14:51.752171       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 10:14:51.749800       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:14:51.749318       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 10:14:51.749648       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:14:51.753730       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:14:51.753745       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 10:14:51.749659       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:14:51.768573       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [96f8dc4383cc8b499df13d61d76c65f07df3e2a0a27a63c934b98f7d5f3da1d7] <==
	I1123 10:14:49.034820       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:14:49.476698       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:14:49.578150       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:14:49.578227       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 10:14:49.578339       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:14:49.711356       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:14:49.711407       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:14:49.716733       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:14:49.718098       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:14:49.718121       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:14:49.719331       1 config.go:200] "Starting service config controller"
	I1123 10:14:49.719354       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:14:49.719383       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:14:49.719396       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:14:49.719406       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:14:49.719410       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:14:49.720042       1 config.go:309] "Starting node config controller"
	I1123 10:14:49.720060       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:14:49.720076       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:14:49.822293       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:14:49.822369       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 10:14:49.822623       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [55951d0d04b4f47b2d5b5cf62ccc211475fb562db7126dbfd7b727861257eac0] <==
	I1123 10:14:46.862972       1 serving.go:386] Generated self-signed cert in-memory
	W1123 10:14:48.271556       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 10:14:48.271590       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:14:48.271600       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 10:14:48.271607       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 10:14:48.454543       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:14:48.454570       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:14:48.463471       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:14:48.463579       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:14:48.463598       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:14:48.463614       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:14:48.564961       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:14:45 newest-cni-499584 kubelet[733]: E1123 10:14:45.567768     733 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-499584\" not found" node="newest-cni-499584"
	Nov 23 10:14:46 newest-cni-499584 kubelet[733]: E1123 10:14:46.017903     733 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-499584\" not found" node="newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.263977     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.450798     733 apiserver.go:52] "Watching apiserver"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.476518     733 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.554366     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8dace15f-cf56-4d36-9840-ceb07d85b8b0-lib-modules\") pod \"kube-proxy-7ccmv\" (UID: \"8dace15f-cf56-4d36-9840-ceb07d85b8b0\") " pod="kube-system/kube-proxy-7ccmv"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.554423     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3933503c-90da-4b79-98e7-e4a22d58813d-cni-cfg\") pod \"kindnet-8pwmm\" (UID: \"3933503c-90da-4b79-98e7-e4a22d58813d\") " pod="kube-system/kindnet-8pwmm"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.554443     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3933503c-90da-4b79-98e7-e4a22d58813d-xtables-lock\") pod \"kindnet-8pwmm\" (UID: \"3933503c-90da-4b79-98e7-e4a22d58813d\") " pod="kube-system/kindnet-8pwmm"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.554487     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8dace15f-cf56-4d36-9840-ceb07d85b8b0-xtables-lock\") pod \"kube-proxy-7ccmv\" (UID: \"8dace15f-cf56-4d36-9840-ceb07d85b8b0\") " pod="kube-system/kube-proxy-7ccmv"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.554512     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3933503c-90da-4b79-98e7-e4a22d58813d-lib-modules\") pod \"kindnet-8pwmm\" (UID: \"3933503c-90da-4b79-98e7-e4a22d58813d\") " pod="kube-system/kindnet-8pwmm"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: E1123 10:14:48.567090     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-499584\" already exists" pod="kube-system/etcd-newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.567131     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.576368     733 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.576469     733 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.576497     733 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.577671     733 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.598786     733 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: E1123 10:14:48.613646     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-499584\" already exists" pod="kube-system/kube-apiserver-newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.613680     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: E1123 10:14:48.641062     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-499584\" already exists" pod="kube-system/kube-controller-manager-newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: I1123 10:14:48.641105     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-499584"
	Nov 23 10:14:48 newest-cni-499584 kubelet[733]: E1123 10:14:48.666851     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-499584\" already exists" pod="kube-system/kube-scheduler-newest-cni-499584"
	Nov 23 10:14:51 newest-cni-499584 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:14:51 newest-cni-499584 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:14:51 newest-cni-499584 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-499584 -n newest-cni-499584
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-499584 -n newest-cni-499584: exit status 2 (435.668771ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-499584 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gpv4n storage-provisioner dashboard-metrics-scraper-6ffb444bf9-h8jzm kubernetes-dashboard-855c9754f9-dvcbz
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-499584 describe pod coredns-66bc5c9577-gpv4n storage-provisioner dashboard-metrics-scraper-6ffb444bf9-h8jzm kubernetes-dashboard-855c9754f9-dvcbz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-499584 describe pod coredns-66bc5c9577-gpv4n storage-provisioner dashboard-metrics-scraper-6ffb444bf9-h8jzm kubernetes-dashboard-855c9754f9-dvcbz: exit status 1 (94.142296ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gpv4n" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-h8jzm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-dvcbz" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-499584 describe pod coredns-66bc5c9577-gpv4n storage-provisioner dashboard-metrics-scraper-6ffb444bf9-h8jzm kubernetes-dashboard-855c9754f9-dvcbz: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.33s)
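
Note: the Pause failures in this run exit with GUEST_PAUSE after minikube's pause path runs `sudo runc list -f json` on the node and gets `open /run/runc: no such file or directory` (the full stderr is shown under the default-k8s-diff-port failure below, and the same retry loop appears in its -alsologtostderr output). A minimal sketch for checking this by hand, assuming the profile is still running; the profile name and binary path are taken from this run, and `minikube ssh -- <cmd>` is the standard way to run a command on the node:

	# reproduce the runc listing error that the pause path hits
	out/minikube-linux-arm64 ssh -p newest-cni-499584 -- sudo runc list -f json
	# check whether the runc state directory exists on the node
	out/minikube-linux-arm64 ssh -p newest-cni-499584 -- ls -ld /run/runc
	# cross-check that CRI-O itself still reports the kube-system containers
	out/minikube-linux-arm64 ssh -p newest-cni-499584 -- sudo crictl ps

If the first command fails with the same "open /run/runc" error while `crictl ps` still lists containers, the node is in the state these tests report: the container runtime is healthy but the runc state directory minikube's pause code expects is absent.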

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-330197 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-330197 --alsologtostderr -v=1: exit status 80 (2.351919249s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-330197 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:14:57.045727  537226 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:14:57.045951  537226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:14:57.045982  537226 out.go:374] Setting ErrFile to fd 2...
	I1123 10:14:57.046003  537226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:14:57.046318  537226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:14:57.046601  537226 out.go:368] Setting JSON to false
	I1123 10:14:57.046656  537226 mustload.go:66] Loading cluster: default-k8s-diff-port-330197
	I1123 10:14:57.047075  537226 config.go:182] Loaded profile config "default-k8s-diff-port-330197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:14:57.047566  537226 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-330197 --format={{.State.Status}}
	I1123 10:14:57.090179  537226 host.go:66] Checking if "default-k8s-diff-port-330197" exists ...
	I1123 10:14:57.090480  537226 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:14:57.170668  537226 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 10:14:57.159515568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:14:57.171432  537226 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-330197 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 10:14:57.174881  537226 out.go:179] * Pausing node default-k8s-diff-port-330197 ... 
	I1123 10:14:57.177779  537226 host.go:66] Checking if "default-k8s-diff-port-330197" exists ...
	I1123 10:14:57.178176  537226 ssh_runner.go:195] Run: systemctl --version
	I1123 10:14:57.178224  537226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-330197
	I1123 10:14:57.211035  537226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/default-k8s-diff-port-330197/id_rsa Username:docker}
	I1123 10:14:57.317193  537226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:14:57.351646  537226 pause.go:52] kubelet running: true
	I1123 10:14:57.351724  537226 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:14:57.657447  537226 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:14:57.657530  537226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:14:57.762830  537226 cri.go:89] found id: "87de444d24b2febb76dda7b50f414db46d89c0d8fc63cbf46209d99a0e01672d"
	I1123 10:14:57.762857  537226 cri.go:89] found id: "97276357e27cf30604562b859301a5b21e5e2d2302ad432fc575ea7916ac030f"
	I1123 10:14:57.762862  537226 cri.go:89] found id: "89a95d5fe671d4bfa2f423fba99ec7d957ff464f9e3b91c5863c5d7913e94d04"
	I1123 10:14:57.762866  537226 cri.go:89] found id: "fab3c1340bb9fc1913f94cded9a1f0fba5136e42c2593fb7823cb21f94b031c1"
	I1123 10:14:57.762869  537226 cri.go:89] found id: "60df752bda6db8b92e0f147182a3bba7647274349456a921663b7a71421bb064"
	I1123 10:14:57.762873  537226 cri.go:89] found id: "42cc19608c6e58ebf338dc82a991b4cd9902c09d76a2fc3ad1709fb98fe71f1c"
	I1123 10:14:57.762876  537226 cri.go:89] found id: "f6adced2438dde36562063e35389aaa6f93406583a489e9200e01abeac6d2ba2"
	I1123 10:14:57.762880  537226 cri.go:89] found id: "49080a105e3a1028d971c78fae51a027ca689e779aae2b400ed02b743c540042"
	I1123 10:14:57.762883  537226 cri.go:89] found id: "fe2851bd5d0e209023685855c54c561683dab32a8f4e2ac4aad2e94044d6da28"
	I1123 10:14:57.762890  537226 cri.go:89] found id: "991ccbc0c6f8554770578bb5b28255043887809251365e1433a8fef879d23513"
	I1123 10:14:57.762893  537226 cri.go:89] found id: "872344c350f1c0db76811cd62d9d7adaa803f3c0d3efcaf1a806e4f1fc4df822"
	I1123 10:14:57.762896  537226 cri.go:89] found id: ""
	I1123 10:14:57.762944  537226 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:14:57.780969  537226 retry.go:31] will retry after 306.769027ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:14:57Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:14:58.088522  537226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:14:58.110987  537226 pause.go:52] kubelet running: false
	I1123 10:14:58.111052  537226 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:14:58.295777  537226 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:14:58.295882  537226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:14:58.404836  537226 cri.go:89] found id: "87de444d24b2febb76dda7b50f414db46d89c0d8fc63cbf46209d99a0e01672d"
	I1123 10:14:58.404859  537226 cri.go:89] found id: "97276357e27cf30604562b859301a5b21e5e2d2302ad432fc575ea7916ac030f"
	I1123 10:14:58.404865  537226 cri.go:89] found id: "89a95d5fe671d4bfa2f423fba99ec7d957ff464f9e3b91c5863c5d7913e94d04"
	I1123 10:14:58.404869  537226 cri.go:89] found id: "fab3c1340bb9fc1913f94cded9a1f0fba5136e42c2593fb7823cb21f94b031c1"
	I1123 10:14:58.404872  537226 cri.go:89] found id: "60df752bda6db8b92e0f147182a3bba7647274349456a921663b7a71421bb064"
	I1123 10:14:58.404876  537226 cri.go:89] found id: "42cc19608c6e58ebf338dc82a991b4cd9902c09d76a2fc3ad1709fb98fe71f1c"
	I1123 10:14:58.404879  537226 cri.go:89] found id: "f6adced2438dde36562063e35389aaa6f93406583a489e9200e01abeac6d2ba2"
	I1123 10:14:58.404883  537226 cri.go:89] found id: "49080a105e3a1028d971c78fae51a027ca689e779aae2b400ed02b743c540042"
	I1123 10:14:58.404886  537226 cri.go:89] found id: "fe2851bd5d0e209023685855c54c561683dab32a8f4e2ac4aad2e94044d6da28"
	I1123 10:14:58.404893  537226 cri.go:89] found id: "991ccbc0c6f8554770578bb5b28255043887809251365e1433a8fef879d23513"
	I1123 10:14:58.404896  537226 cri.go:89] found id: "872344c350f1c0db76811cd62d9d7adaa803f3c0d3efcaf1a806e4f1fc4df822"
	I1123 10:14:58.404900  537226 cri.go:89] found id: ""
	I1123 10:14:58.404947  537226 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:14:58.443897  537226 retry.go:31] will retry after 497.457477ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:14:58Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:14:58.941577  537226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:14:58.963817  537226 pause.go:52] kubelet running: false
	I1123 10:14:58.963878  537226 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 10:14:59.160064  537226 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 10:14:59.160138  537226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 10:14:59.251800  537226 cri.go:89] found id: "87de444d24b2febb76dda7b50f414db46d89c0d8fc63cbf46209d99a0e01672d"
	I1123 10:14:59.251818  537226 cri.go:89] found id: "97276357e27cf30604562b859301a5b21e5e2d2302ad432fc575ea7916ac030f"
	I1123 10:14:59.251823  537226 cri.go:89] found id: "89a95d5fe671d4bfa2f423fba99ec7d957ff464f9e3b91c5863c5d7913e94d04"
	I1123 10:14:59.251827  537226 cri.go:89] found id: "fab3c1340bb9fc1913f94cded9a1f0fba5136e42c2593fb7823cb21f94b031c1"
	I1123 10:14:59.251830  537226 cri.go:89] found id: "60df752bda6db8b92e0f147182a3bba7647274349456a921663b7a71421bb064"
	I1123 10:14:59.251834  537226 cri.go:89] found id: "42cc19608c6e58ebf338dc82a991b4cd9902c09d76a2fc3ad1709fb98fe71f1c"
	I1123 10:14:59.251837  537226 cri.go:89] found id: "f6adced2438dde36562063e35389aaa6f93406583a489e9200e01abeac6d2ba2"
	I1123 10:14:59.251840  537226 cri.go:89] found id: "49080a105e3a1028d971c78fae51a027ca689e779aae2b400ed02b743c540042"
	I1123 10:14:59.251844  537226 cri.go:89] found id: "fe2851bd5d0e209023685855c54c561683dab32a8f4e2ac4aad2e94044d6da28"
	I1123 10:14:59.251850  537226 cri.go:89] found id: "991ccbc0c6f8554770578bb5b28255043887809251365e1433a8fef879d23513"
	I1123 10:14:59.251853  537226 cri.go:89] found id: "872344c350f1c0db76811cd62d9d7adaa803f3c0d3efcaf1a806e4f1fc4df822"
	I1123 10:14:59.251856  537226 cri.go:89] found id: ""
	I1123 10:14:59.251911  537226 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 10:14:59.276846  537226 out.go:203] 
	W1123 10:14:59.282678  537226 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:14:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:14:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 10:14:59.282702  537226 out.go:285] * 
	* 
	W1123 10:14:59.291499  537226 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 10:14:59.298343  537226 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-330197 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-330197
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-330197:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c",
	        "Created": "2025-11-23T10:12:08.256335726Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 529566,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:13:48.624470776Z",
	            "FinishedAt": "2025-11-23T10:13:47.505247688Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c/hostname",
	        "HostsPath": "/var/lib/docker/containers/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c/hosts",
	        "LogPath": "/var/lib/docker/containers/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c-json.log",
	        "Name": "/default-k8s-diff-port-330197",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-330197:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-330197",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c",
	                "LowerDir": "/var/lib/docker/overlay2/3f48485d51ecbb271eab092e267a4905e984900c5592bc0d63966db4bfd4a0c4-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3f48485d51ecbb271eab092e267a4905e984900c5592bc0d63966db4bfd4a0c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3f48485d51ecbb271eab092e267a4905e984900c5592bc0d63966db4bfd4a0c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3f48485d51ecbb271eab092e267a4905e984900c5592bc0d63966db4bfd4a0c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-330197",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-330197/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-330197",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-330197",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-330197",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "91454532723a66b4aa8f586d2ab4a13260c6f0225e6cd8510a3174f39762d934",
	            "SandboxKey": "/var/run/docker/netns/91454532723a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33500"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-330197": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:1b:c5:58:e1:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "648b049bc86ff8eff41f306c615c5a3664920d5b8756357da481331ccc4f062a",
	                    "EndpointID": "4fc785b021d07b9cc23d602f24866619da430e72c9471b11544de250de2baa49",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-330197",
	                        "001c54c15317"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-330197 -n default-k8s-diff-port-330197
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-330197 -n default-k8s-diff-port-330197: exit status 2 (432.160633ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-330197 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-330197 logs -n 25: (1.667467645s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p disable-driver-mounts-097888                                                                                                                                                                                                               │ disable-driver-mounts-097888 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-566990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │                     │
	│ stop    │ -p embed-certs-566990 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ addons  │ enable dashboard -p embed-certs-566990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-330197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-330197 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ image   │ embed-certs-566990 image list --format=json                                                                                                                                                                                                   │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ pause   │ -p embed-certs-566990 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	│ delete  │ -p embed-certs-566990                                                                                                                                                                                                                         │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ delete  │ -p embed-certs-566990                                                                                                                                                                                                                         │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ start   │ -p newest-cni-499584 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:14 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-330197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ start   │ -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-499584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │                     │
	│ stop    │ -p newest-cni-499584 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ addons  │ enable dashboard -p newest-cni-499584 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ start   │ -p newest-cni-499584 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ image   │ newest-cni-499584 image list --format=json                                                                                                                                                                                                    │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ pause   │ -p newest-cni-499584 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │                     │
	│ image   │ default-k8s-diff-port-330197 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ pause   │ -p default-k8s-diff-port-330197 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │                     │
	│ delete  │ -p newest-cni-499584                                                                                                                                                                                                                          │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ delete  │ -p newest-cni-499584                                                                                                                                                                                                                          │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:14:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:14:35.308429  534325 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:14:35.308606  534325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:14:35.308636  534325 out.go:374] Setting ErrFile to fd 2...
	I1123 10:14:35.308660  534325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:14:35.308962  534325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:14:35.309373  534325 out.go:368] Setting JSON to false
	I1123 10:14:35.310426  534325 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10624,"bootTime":1763882251,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:14:35.310533  534325 start.go:143] virtualization:  
	I1123 10:14:35.314130  534325 out.go:179] * [newest-cni-499584] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:14:35.318169  534325 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 10:14:35.318306  534325 notify.go:221] Checking for updates...
	I1123 10:14:35.324604  534325 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:14:35.327708  534325 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:14:35.330662  534325 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 10:14:35.333670  534325 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:14:35.336633  534325 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:14:35.340028  534325 config.go:182] Loaded profile config "newest-cni-499584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:14:35.340614  534325 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:14:35.371141  534325 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:14:35.371271  534325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:14:35.436934  534325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:14:35.426581673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:14:35.437078  534325 docker.go:319] overlay module found
	I1123 10:14:35.440404  534325 out.go:179] * Using the docker driver based on existing profile
	I1123 10:14:35.443223  534325 start.go:309] selected driver: docker
	I1123 10:14:35.443245  534325 start.go:927] validating driver "docker" against &{Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:14:35.443371  534325 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:14:35.444083  534325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:14:35.497494  534325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:14:35.4876245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:14:35.497858  534325 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:14:35.497886  534325 cni.go:84] Creating CNI manager for ""
	I1123 10:14:35.497946  534325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:14:35.497992  534325 start.go:353] cluster config:
	{Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:14:35.503042  534325 out.go:179] * Starting "newest-cni-499584" primary control-plane node in "newest-cni-499584" cluster
	I1123 10:14:35.505793  534325 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:14:35.508757  534325 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:14:35.511894  534325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:14:35.511952  534325 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 10:14:35.511963  534325 cache.go:65] Caching tarball of preloaded images
	I1123 10:14:35.512065  534325 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 10:14:35.512076  534325 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:14:35.512186  534325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/config.json ...
	I1123 10:14:35.512284  534325 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:14:35.537457  534325 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:14:35.537482  534325 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:14:35.537505  534325 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:14:35.537537  534325 start.go:360] acquireMachinesLock for newest-cni-499584: {Name:mk060761daeb1a62836bf24a9b9e867393b1f580 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:14:35.537611  534325 start.go:364] duration metric: took 51.693µs to acquireMachinesLock for "newest-cni-499584"
	I1123 10:14:35.537632  534325 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:14:35.537637  534325 fix.go:54] fixHost starting: 
	I1123 10:14:35.537894  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:35.553302  534325 fix.go:112] recreateIfNeeded on newest-cni-499584: state=Stopped err=<nil>
	W1123 10:14:35.553334  534325 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 10:14:35.397704  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	W1123 10:14:37.895831  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	I1123 10:14:35.556606  534325 out.go:252] * Restarting existing docker container for "newest-cni-499584" ...
	I1123 10:14:35.556691  534325 cli_runner.go:164] Run: docker start newest-cni-499584
	I1123 10:14:35.810485  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:35.841678  534325 kic.go:430] container "newest-cni-499584" state is running.
	I1123 10:14:35.842092  534325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-499584
	I1123 10:14:35.865371  534325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/config.json ...
	I1123 10:14:35.865636  534325 machine.go:94] provisionDockerMachine start ...
	I1123 10:14:35.865701  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:35.888754  534325 main.go:143] libmachine: Using SSH client type: native
	I1123 10:14:35.889100  534325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I1123 10:14:35.889118  534325 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:14:35.889757  534325 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 10:14:39.045105  534325 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-499584
	
	I1123 10:14:39.045131  534325 ubuntu.go:182] provisioning hostname "newest-cni-499584"
	I1123 10:14:39.045248  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:39.063953  534325 main.go:143] libmachine: Using SSH client type: native
	I1123 10:14:39.064263  534325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I1123 10:14:39.064279  534325 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-499584 && echo "newest-cni-499584" | sudo tee /etc/hostname
	I1123 10:14:39.227033  534325 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-499584
	
	I1123 10:14:39.227156  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:39.245578  534325 main.go:143] libmachine: Using SSH client type: native
	I1123 10:14:39.245894  534325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I1123 10:14:39.245917  534325 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-499584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-499584/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-499584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:14:39.399001  534325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
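The hostname script above only touches /etc/hosts when no line already ends in the new hostname: it rewrites an existing 127.0.1.1 entry if there is one, otherwise it appends a fresh one. A minimal sketch of the expected line inside the container afterwards (the rest of the file is assumed, not taken from this run):

	127.0.1.1 newest-cni-499584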
	I1123 10:14:39.399024  534325 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 10:14:39.399055  534325 ubuntu.go:190] setting up certificates
	I1123 10:14:39.399073  534325 provision.go:84] configureAuth start
	I1123 10:14:39.399131  534325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-499584
	I1123 10:14:39.416579  534325 provision.go:143] copyHostCerts
	I1123 10:14:39.416662  534325 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 10:14:39.416680  534325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 10:14:39.416761  534325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 10:14:39.416866  534325 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 10:14:39.416870  534325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 10:14:39.416899  534325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 10:14:39.416961  534325 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 10:14:39.416967  534325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 10:14:39.416990  534325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 10:14:39.417043  534325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.newest-cni-499584 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-499584]
	I1123 10:14:39.627598  534325 provision.go:177] copyRemoteCerts
	I1123 10:14:39.627689  534325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:14:39.627763  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:39.647970  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:39.757042  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:14:39.774686  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:14:39.794086  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:14:39.812248  534325 provision.go:87] duration metric: took 413.152366ms to configureAuth
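configureAuth copies the host CA plus a freshly generated server certificate and key into /etc/docker on the machine (copyRemoteCerts above). An illustrative way to confirm the server certificate chains to the copied CA, not something the test itself runs:

	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	# expected: /etc/docker/server.pem: OK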
	I1123 10:14:39.812274  534325 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:14:39.812473  534325 config.go:182] Loaded profile config "newest-cni-499584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:14:39.812587  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:39.831627  534325 main.go:143] libmachine: Using SSH client type: native
	I1123 10:14:39.831936  534325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I1123 10:14:39.831959  534325 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:14:40.202065  534325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:14:40.202163  534325 machine.go:97] duration metric: took 4.336516564s to provisionDockerMachine
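Provisioning finishes by writing CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube and restarting CRI-O, so the cluster's service CIDR (10.96.0.0/12) is treated as an insecure registry. Illustrative follow-up checks, not part of the run:

	sudo cat /etc/sysconfig/crio.minikube   # expect CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl is-active crio           # expect "active" after the restart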
	I1123 10:14:40.202198  534325 start.go:293] postStartSetup for "newest-cni-499584" (driver="docker")
	I1123 10:14:40.202228  534325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:14:40.202328  534325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:14:40.202388  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:40.221920  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:40.329811  534325 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:14:40.333732  534325 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:14:40.333763  534325 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:14:40.333774  534325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 10:14:40.333829  534325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 10:14:40.333908  534325 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 10:14:40.334018  534325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:14:40.341710  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:14:40.360854  534325 start.go:296] duration metric: took 158.623442ms for postStartSetup
	I1123 10:14:40.360956  534325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:14:40.361017  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:40.378625  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:40.482834  534325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:14:40.488027  534325 fix.go:56] duration metric: took 4.950382019s for fixHost
	I1123 10:14:40.488055  534325 start.go:83] releasing machines lock for "newest-cni-499584", held for 4.950434147s
	I1123 10:14:40.488126  534325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-499584
	I1123 10:14:40.507445  534325 ssh_runner.go:195] Run: cat /version.json
	I1123 10:14:40.507515  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:40.507781  534325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:14:40.507851  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:40.526536  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:40.540744  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:40.633331  534325 ssh_runner.go:195] Run: systemctl --version
	I1123 10:14:40.735956  534325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:14:40.771974  534325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:14:40.776327  534325 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:14:40.776407  534325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:14:40.784473  534325 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:14:40.784497  534325 start.go:496] detecting cgroup driver to use...
	I1123 10:14:40.784529  534325 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:14:40.784595  534325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:14:40.802198  534325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:14:40.815436  534325 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:14:40.815516  534325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:14:40.833773  534325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:14:40.847175  534325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:14:40.964031  534325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:14:41.079493  534325 docker.go:234] disabling docker service ...
	I1123 10:14:41.079592  534325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:14:41.099078  534325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:14:41.126997  534325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:14:41.246640  534325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:14:41.357516  534325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:14:41.371610  534325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:14:41.386018  534325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:14:41.386151  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.400270  534325 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:14:41.400380  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.410282  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.419540  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.429966  534325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:14:41.442207  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.452481  534325 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.462238  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.472504  534325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:14:41.480544  534325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:14:41.488228  534325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:14:41.629522  534325 ssh_runner.go:195] Run: sudo systemctl restart crio
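The sed edits above set the pause image, switch CRI-O to the cgroupfs cgroup manager with conmon in the pod cgroup, and open unprivileged ports via default_sysctls before the daemon is restarted. An illustrative reconstruction of the affected keys in /etc/crio/crio.conf.d/02-crio.conf (the section headers and any other keys are assumptions, not copied from the node):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]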
	I1123 10:14:41.808644  534325 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:14:41.808710  534325 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:14:41.812474  534325 start.go:564] Will wait 60s for crictl version
	I1123 10:14:41.812551  534325 ssh_runner.go:195] Run: which crictl
	I1123 10:14:41.816298  534325 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:14:41.846825  534325 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:14:41.846917  534325 ssh_runner.go:195] Run: crio --version
	I1123 10:14:41.875254  534325 ssh_runner.go:195] Run: crio --version
	I1123 10:14:41.910420  534325 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:14:41.913261  534325 cli_runner.go:164] Run: docker network inspect newest-cni-499584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:14:41.928711  534325 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:14:41.932661  534325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:14:41.945351  534325 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1123 10:14:39.896516  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	W1123 10:14:41.896853  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	I1123 10:14:41.948184  534325 kubeadm.go:884] updating cluster {Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:14:41.948333  534325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:14:41.948404  534325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:14:41.983037  534325 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:14:41.983059  534325 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:14:41.983122  534325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:14:42.012647  534325 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:14:42.012670  534325 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:14:42.012684  534325 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 10:14:42.012801  534325 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-499584 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
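The [Service] fragment above becomes the kubelet systemd drop-in that is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. If the kubelet misbehaves later, the effective unit and recent logs can be inspected with standard systemd tooling (illustrative, not executed by this test):

	sudo systemctl cat kubelet                    # kubelet.service plus the 10-kubeadm.conf drop-in
	sudo journalctl -u kubelet -n 50 --no-pager   # most recent kubelet log lines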
	I1123 10:14:42.012913  534325 ssh_runner.go:195] Run: crio config
	I1123 10:14:42.085762  534325 cni.go:84] Creating CNI manager for ""
	I1123 10:14:42.085845  534325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:14:42.085887  534325 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 10:14:42.085940  534325 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-499584 NodeName:newest-cni-499584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:14:42.086144  534325 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-499584"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:14:42.086272  534325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:14:42.099771  534325 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:14:42.099872  534325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:14:42.109476  534325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:14:42.125982  534325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:14:42.143068  534325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
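With the generated config staged as /var/tmp/minikube/kubeadm.yaml.new, it could be sanity-checked against the bundled kubeadm before the restart path continues; this is purely illustrative ("kubeadm config validate" exists in kubeadm v1.26+ and is not invoked here):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new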
	I1123 10:14:42.162444  534325 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:14:42.167161  534325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:14:42.179960  534325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:14:42.317087  534325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:14:42.336114  534325 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584 for IP: 192.168.76.2
	I1123 10:14:42.336139  534325 certs.go:195] generating shared ca certs ...
	I1123 10:14:42.336157  534325 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:42.336301  534325 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:14:42.336359  534325 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:14:42.336372  534325 certs.go:257] generating profile certs ...
	I1123 10:14:42.336466  534325 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/client.key
	I1123 10:14:42.336546  534325 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.key.22d7de13
	I1123 10:14:42.336598  534325 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.key
	I1123 10:14:42.336725  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:14:42.336762  534325 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:14:42.336780  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:14:42.336809  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:14:42.336841  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:14:42.336874  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:14:42.336925  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:14:42.337678  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:14:42.363407  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:14:42.382696  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:14:42.404059  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:14:42.425457  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:14:42.449626  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:14:42.473741  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:14:42.503125  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:14:42.539613  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:14:42.564062  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:14:42.584177  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:14:42.606938  534325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:14:42.620683  534325 ssh_runner.go:195] Run: openssl version
	I1123 10:14:42.627518  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:14:42.636658  534325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:14:42.640680  534325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:14:42.640792  534325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:14:42.685708  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 10:14:42.696740  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:14:42.705991  534325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:14:42.709884  534325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:14:42.709962  534325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:14:42.750861  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:14:42.759122  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:14:42.767889  534325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:14:42.772006  534325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:14:42.772075  534325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:14:42.814113  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
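Each "ln -fs" above names the symlink after the certificate's subject hash, which is exactly what "openssl x509 -hash" prints; for example, the minikubeCA link created on the previous line:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching /etc/ssl/certs/b5213941.0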
	I1123 10:14:42.823220  534325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:14:42.827234  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:14:42.869026  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:14:42.913588  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:14:42.969491  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:14:43.021834  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:14:43.089424  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
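The -checkend 86400 checks above exit non-zero if a certificate expires within the next 24 hours, which is presumably how the restart path decides the existing control-plane certs are still usable. A standalone example of the same check (the &&/|| reporting is illustrative):

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least another day" \
	  || echo "expires within 24h - needs regeneration"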
	I1123 10:14:43.174105  534325 kubeadm.go:401] StartCluster: {Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:14:43.174258  534325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:14:43.174371  534325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:14:43.253263  534325 cri.go:89] found id: "55951d0d04b4f47b2d5b5cf62ccc211475fb562db7126dbfd7b727861257eac0"
	I1123 10:14:43.253335  534325 cri.go:89] found id: "29bf860a34581ef12a5c2e695cb5c4f9bee91e4dfc153fd656e57d8c48fa1f90"
	I1123 10:14:43.253355  534325 cri.go:89] found id: "5ba284305a4abd603bf7200240b618f02ce262119d49e43b0da6cf7313bbc7be"
	I1123 10:14:43.253381  534325 cri.go:89] found id: "7ef630ea40b951127d767ee2e09ebb4700a9b36e54474665707cf2be5860d032"
	I1123 10:14:43.253424  534325 cri.go:89] found id: ""
	I1123 10:14:43.253525  534325 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:14:43.275277  534325 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:14:43Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:14:43.275413  534325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:14:43.288015  534325 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:14:43.288090  534325 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:14:43.288184  534325 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:14:43.300089  534325 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:14:43.300769  534325 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-499584" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:14:43.301095  534325 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-282998/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-499584" cluster setting kubeconfig missing "newest-cni-499584" context setting]
	I1123 10:14:43.301637  534325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:43.303525  534325 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:14:43.315357  534325 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:14:43.315438  534325 kubeadm.go:602] duration metric: took 27.319463ms to restartPrimaryControlPlane
	I1123 10:14:43.315462  534325 kubeadm.go:403] duration metric: took 141.368104ms to StartCluster
	I1123 10:14:43.315508  534325 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:43.315602  534325 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:14:43.316666  534325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:43.316959  534325 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:14:43.317718  534325 config.go:182] Loaded profile config "newest-cni-499584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:14:43.317697  534325 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:14:43.317796  534325 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-499584"
	I1123 10:14:43.317809  534325 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-499584"
	W1123 10:14:43.317818  534325 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:14:43.317823  534325 addons.go:70] Setting dashboard=true in profile "newest-cni-499584"
	I1123 10:14:43.317850  534325 addons.go:70] Setting default-storageclass=true in profile "newest-cni-499584"
	I1123 10:14:43.317863  534325 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-499584"
	I1123 10:14:43.317855  534325 addons.go:239] Setting addon dashboard=true in "newest-cni-499584"
	W1123 10:14:43.317900  534325 addons.go:248] addon dashboard should already be in state true
	I1123 10:14:43.317929  534325 host.go:66] Checking if "newest-cni-499584" exists ...
	I1123 10:14:43.318191  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:43.317844  534325 host.go:66] Checking if "newest-cni-499584" exists ...
	I1123 10:14:43.319197  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:43.319350  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:43.322853  534325 out.go:179] * Verifying Kubernetes components...
	I1123 10:14:43.326101  534325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:14:43.365483  534325 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:14:43.375618  534325 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:14:43.380627  534325 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:14:43.413785  529379 pod_ready.go:94] pod "coredns-66bc5c9577-pphv6" is "Ready"
	I1123 10:14:43.413811  529379 pod_ready.go:86] duration metric: took 36.023476036s for pod "coredns-66bc5c9577-pphv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.423051  529379 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.472047  529379 pod_ready.go:94] pod "etcd-default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:43.472071  529379 pod_ready.go:86] duration metric: took 48.99566ms for pod "etcd-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.480493  529379 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.497900  529379 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:43.497926  529379 pod_ready.go:86] duration metric: took 17.40953ms for pod "kube-apiserver-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.501812  529379 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.594337  529379 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:43.594360  529379 pod_ready.go:86] duration metric: took 92.526931ms for pod "kube-controller-manager-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.793574  529379 pod_ready.go:83] waiting for pod "kube-proxy-75qqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:44.193846  529379 pod_ready.go:94] pod "kube-proxy-75qqt" is "Ready"
	I1123 10:14:44.193870  529379 pod_ready.go:86] duration metric: took 400.271598ms for pod "kube-proxy-75qqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:44.394519  529379 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:44.794600  529379 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:44.794624  529379 pod_ready.go:86] duration metric: took 400.080817ms for pod "kube-scheduler-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:44.794638  529379 pod_ready.go:40] duration metric: took 37.408453068s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:14:44.888790  529379 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:14:44.892050  529379 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-330197" cluster and "default" namespace by default
	I1123 10:14:43.380644  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:14:43.380714  534325 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:14:43.380780  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:43.384089  534325 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:14:43.384112  534325 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:14:43.384176  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:43.393141  534325 addons.go:239] Setting addon default-storageclass=true in "newest-cni-499584"
	W1123 10:14:43.393166  534325 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:14:43.393191  534325 host.go:66] Checking if "newest-cni-499584" exists ...
	I1123 10:14:43.393629  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:43.426018  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:43.454818  534325 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:14:43.454840  534325 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:14:43.454905  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:43.477805  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:43.499014  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:43.684857  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:14:43.684931  534325 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:14:43.710152  534325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:14:43.742808  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:14:43.742881  534325 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:14:43.748309  534325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:14:43.767832  534325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:14:43.777593  534325 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:14:43.777753  534325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:14:43.819844  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:14:43.819915  534325 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:14:43.906458  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:14:43.906530  534325 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:14:43.961147  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:14:43.961219  534325 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:14:44.045109  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:14:44.045187  534325 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:14:44.085266  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:14:44.085342  534325 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:14:44.122397  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:14:44.122478  534325 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:14:44.149167  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:14:44.149243  534325 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:14:44.176014  534325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:14:49.874831  534325 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.126442363s)
	I1123 10:14:49.874895  534325 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.106997388s)
	I1123 10:14:49.875220  534325 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.097427038s)
	I1123 10:14:49.875249  534325 api_server.go:72] duration metric: took 6.558232564s to wait for apiserver process to appear ...
	I1123 10:14:49.875256  534325 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:14:49.875268  534325 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:14:49.875552  534325 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.699461585s)
	I1123 10:14:49.878710  534325 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-499584 addons enable metrics-server
	
	I1123 10:14:49.899521  534325 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:14:49.901259  534325 api_server.go:141] control plane version: v1.34.1
	I1123 10:14:49.901287  534325 api_server.go:131] duration metric: took 26.024714ms to wait for apiserver health ...
	I1123 10:14:49.901296  534325 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:14:49.907916  534325 system_pods.go:59] 8 kube-system pods found
	I1123 10:14:49.907956  534325 system_pods.go:61] "coredns-66bc5c9577-gpv4n" [3ac78ff6-250d-4ce6-ba6f-913ba5a46be8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:14:49.907965  534325 system_pods.go:61] "etcd-newest-cni-499584" [fbc5fde9-9d75-41ee-a27e-bea9e43c5c1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:14:49.907971  534325 system_pods.go:61] "kindnet-8pwmm" [3933503c-90da-4b79-98e7-e4a22d58813d] Running
	I1123 10:14:49.907978  534325 system_pods.go:61] "kube-apiserver-newest-cni-499584" [2a4c121c-305b-4eef-8b3a-127a1fef8812] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:14:49.907985  534325 system_pods.go:61] "kube-controller-manager-newest-cni-499584" [c00e062c-870f-4ed7-a05d-615fc6c7d81d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:14:49.907989  534325 system_pods.go:61] "kube-proxy-7ccmv" [8dace15f-cf56-4d36-9840-ceb07d85b8b0] Running
	I1123 10:14:49.907995  534325 system_pods.go:61] "kube-scheduler-newest-cni-499584" [94684fe3-8d3e-4f48-9dad-6f0c6414f3c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:14:49.908028  534325 system_pods.go:61] "storage-provisioner" [70f72df9-2a87-468c-9f4c-2df81d587a29] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:14:49.908042  534325 system_pods.go:74] duration metric: took 6.740578ms to wait for pod list to return data ...
	I1123 10:14:49.908051  534325 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:14:49.908970  534325 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 10:14:49.911793  534325 addons.go:530] duration metric: took 6.594095134s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 10:14:49.916336  534325 default_sa.go:45] found service account: "default"
	I1123 10:14:49.916411  534325 default_sa.go:55] duration metric: took 8.348899ms for default service account to be created ...
	I1123 10:14:49.916442  534325 kubeadm.go:587] duration metric: took 6.59942349s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:14:49.916485  534325 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:14:49.919381  534325 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:14:49.919471  534325 node_conditions.go:123] node cpu capacity is 2
	I1123 10:14:49.919500  534325 node_conditions.go:105] duration metric: took 2.99226ms to run NodePressure ...
	I1123 10:14:49.919527  534325 start.go:242] waiting for startup goroutines ...
	I1123 10:14:49.919552  534325 start.go:247] waiting for cluster config update ...
	I1123 10:14:49.919581  534325 start.go:256] writing updated cluster config ...
	I1123 10:14:49.919880  534325 ssh_runner.go:195] Run: rm -f paused
	I1123 10:14:50.006509  534325 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:14:50.010080  534325 out.go:179] * Done! kubectl is now configured to use "newest-cni-499584" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.096562018Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.116516359Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.117299108Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.139068953Z" level=info msg="Created container 991ccbc0c6f8554770578bb5b28255043887809251365e1433a8fef879d23513: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82/dashboard-metrics-scraper" id=c91d5bb8-fe55-488a-97b6-10cc61c2637a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.144277158Z" level=info msg="Starting container: 991ccbc0c6f8554770578bb5b28255043887809251365e1433a8fef879d23513" id=cfceedd1-784c-4b3a-8ff2-1b965a286229 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.148054884Z" level=info msg="Started container" PID=1646 containerID=991ccbc0c6f8554770578bb5b28255043887809251365e1433a8fef879d23513 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82/dashboard-metrics-scraper id=cfceedd1-784c-4b3a-8ff2-1b965a286229 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c6f0b3efbeae5e54329a1d042597ae5acd1e34f439dd95128728d63d5022d59c
	Nov 23 10:14:41 default-k8s-diff-port-330197 conmon[1644]: conmon 991ccbc0c6f855477057 <ninfo>: container 1646 exited with status 1
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.538052584Z" level=info msg="Removing container: f1d50ecf0c6fe034966d82e9cc11ed2015e8e2c5ec1f4d71e574d03604c8d48e" id=69c24e5a-6d7d-4069-8697-a9b0c0fa0e37 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.545661861Z" level=info msg="Error loading conmon cgroup of container f1d50ecf0c6fe034966d82e9cc11ed2015e8e2c5ec1f4d71e574d03604c8d48e: cgroup deleted" id=69c24e5a-6d7d-4069-8697-a9b0c0fa0e37 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.548700358Z" level=info msg="Removed container f1d50ecf0c6fe034966d82e9cc11ed2015e8e2c5ec1f4d71e574d03604c8d48e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82/dashboard-metrics-scraper" id=69c24e5a-6d7d-4069-8697-a9b0c0fa0e37 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.417900769Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.422173066Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.422332248Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.422415301Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.429344223Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.42937674Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.429397344Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.432753688Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.432894761Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.432980653Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.438954983Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.4391178Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.439196275Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.445257532Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.445293069Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	991ccbc0c6f85       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago       Exited              dashboard-metrics-scraper   2                   c6f0b3efbeae5       dashboard-metrics-scraper-6ffb444bf9-zzt82             kubernetes-dashboard
	87de444d24b2f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   1490e7d0cba3f       storage-provisioner                                    kube-system
	872344c350f1c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   93396b70f5a0c       kubernetes-dashboard-855c9754f9-8wqtw                  kubernetes-dashboard
	97276357e27cf       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   ebc85df750f0b       coredns-66bc5c9577-pphv6                               kube-system
	0fc5d9f48de90       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   c566caa8cf467       busybox                                                default
	89a95d5fe671d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   1490e7d0cba3f       storage-provisioner                                    kube-system
	fab3c1340bb9f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   96b0a3f83b264       kindnet-wfv8n                                          kube-system
	60df752bda6db       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   51ac7c37980e8       kube-proxy-75qqt                                       kube-system
	42cc19608c6e5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   54f8f350c1a75       kube-controller-manager-default-k8s-diff-port-330197   kube-system
	f6adced2438dd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   7fc2bc5f7893d       kube-scheduler-default-k8s-diff-port-330197            kube-system
	49080a105e3a1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   4ac440c0b42f1       kube-apiserver-default-k8s-diff-port-330197            kube-system
	fe2851bd5d0e2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   764501f445bb1       etcd-default-k8s-diff-port-330197                      kube-system
	
	
	==> coredns [97276357e27cf30604562b859301a5b21e5e2d2302ad432fc575ea7916ac030f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35438 - 6294 "HINFO IN 4520195747009942529.4274652798983577699. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.042713204s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-330197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-330197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=default-k8s-diff-port-330197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_12_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:12:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-330197
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:14:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:14:44 +0000   Sun, 23 Nov 2025 10:12:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:14:44 +0000   Sun, 23 Nov 2025 10:12:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:14:44 +0000   Sun, 23 Nov 2025 10:12:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:14:44 +0000   Sun, 23 Nov 2025 10:13:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-330197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                9fb10197-c662-4288-a6e4-d39f9ec1d57e
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-pphv6                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-default-k8s-diff-port-330197                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m26s
	  kube-system                 kindnet-wfv8n                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-330197             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-330197    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-75qqt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-default-k8s-diff-port-330197             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zzt82              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8wqtw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 53s                    kube-proxy       
	  Normal   Starting                 2m34s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m23s                  node-controller  Node default-k8s-diff-port-330197 event: Registered Node default-k8s-diff-port-330197 in Controller
	  Normal   NodeReady                99s                    kubelet          Node default-k8s-diff-port-330197 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node default-k8s-diff-port-330197 event: Registered Node default-k8s-diff-port-330197 in Controller
	
	
	==> dmesg <==
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	[Nov23 10:10] overlayfs: idmapped layers are currently not supported
	[Nov23 10:11] overlayfs: idmapped layers are currently not supported
	[Nov23 10:12] overlayfs: idmapped layers are currently not supported
	[ +16.378417] overlayfs: idmapped layers are currently not supported
	[Nov23 10:13] overlayfs: idmapped layers are currently not supported
	[Nov23 10:14] overlayfs: idmapped layers are currently not supported
	[ +29.685025] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fe2851bd5d0e209023685855c54c561683dab32a8f4e2ac4aad2e94044d6da28] <==
	{"level":"warn","ts":"2025-11-23T10:14:01.676462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.699141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.720708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.738710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.790490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.803302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.822315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.845454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.862222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.876893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.906337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.939714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.971967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:02.042615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:02.043767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:02.061760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:02.079296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:02.106512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:02.125904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:02.140077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:02.239767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58586","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T10:14:05.826506Z","caller":"traceutil/trace.go:172","msg":"trace[1747657151] linearizableReadLoop","detail":"{readStateIndex:570; appliedIndex:570; }","duration":"120.628678ms","start":"2025-11-23T10:14:05.705857Z","end":"2025-11-23T10:14:05.826486Z","steps":["trace[1747657151] 'read index received'  (duration: 120.622648ms)","trace[1747657151] 'applied index is now lower than readState.Index'  (duration: 5.194µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T10:14:05.827292Z","caller":"traceutil/trace.go:172","msg":"trace[1733730109] transaction","detail":"{read_only:false; response_revision:541; number_of_response:1; }","duration":"133.416566ms","start":"2025-11-23T10:14:05.693858Z","end":"2025-11-23T10:14:05.827274Z","steps":["trace[1733730109] 'process raft request'  (duration: 132.949153ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T10:14:05.829049Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.121427ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:replication-controller\" limit:1 ","response":"range_response_count:1 size:763"}
	{"level":"info","ts":"2025-11-23T10:14:05.829124Z","caller":"traceutil/trace.go:172","msg":"trace[211356073] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:replication-controller; range_end:; response_count:1; response_revision:540; }","duration":"123.236622ms","start":"2025-11-23T10:14:05.705852Z","end":"2025-11-23T10:14:05.829089Z","steps":["trace[211356073] 'agreement among raft nodes before linearized reading'  (duration: 120.693557ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:15:01 up  2:57,  0 user,  load average: 6.00, 5.00, 3.85
	Linux default-k8s-diff-port-330197 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fab3c1340bb9fc1913f94cded9a1f0fba5136e42c2593fb7823cb21f94b031c1] <==
	I1123 10:14:06.026199       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:14:06.050885       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 10:14:06.051035       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:14:06.051048       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:14:06.051064       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:14:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:14:06.417783       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:14:06.417798       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:14:06.417806       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:14:06.418897       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 10:14:36.418905       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 10:14:36.418911       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 10:14:36.418994       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 10:14:36.419027       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1123 10:14:37.817941       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:14:37.818079       1 metrics.go:72] Registering metrics
	I1123 10:14:37.818186       1 controller.go:711] "Syncing nftables rules"
	I1123 10:14:46.417485       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:14:46.417630       1 main.go:301] handling current node
	I1123 10:14:56.425506       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:14:56.425541       1 main.go:301] handling current node
	
	
	==> kube-apiserver [49080a105e3a1028d971c78fae51a027ca689e779aae2b400ed02b743c540042] <==
	I1123 10:14:03.560384       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 10:14:03.560450       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 10:14:03.561015       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 10:14:03.564987       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 10:14:03.565213       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:14:03.682200       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 10:14:03.682224       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 10:14:03.682391       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 10:14:03.682637       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 10:14:03.707802       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 10:14:03.707842       1 policy_source.go:240] refreshing policies
	I1123 10:14:03.723572       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:14:03.776632       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1123 10:14:03.881784       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 10:14:04.094512       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:14:04.241328       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:14:06.296137       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:14:06.341462       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:14:06.429554       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:14:06.464830       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:14:06.839325       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.174.153"}
	I1123 10:14:06.918484       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.7.70"}
	I1123 10:14:08.182839       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:14:08.234638       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:14:08.281707       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [42cc19608c6e58ebf338dc82a991b4cd9902c09d76a2fc3ad1709fb98fe71f1c] <==
	I1123 10:14:07.867193       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 10:14:07.867228       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 10:14:07.873819       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 10:14:07.873869       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:14:07.874022       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 10:14:07.876734       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:14:07.878629       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:14:07.881456       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 10:14:07.881629       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 10:14:07.887357       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:14:07.888653       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:14:07.892065       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 10:14:07.892134       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 10:14:07.894083       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 10:14:07.894181       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:14:07.894265       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-330197"
	I1123 10:14:07.894331       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 10:14:07.894806       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:14:07.901492       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:14:07.901660       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:14:07.909523       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:14:07.929458       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:14:07.943231       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:14:07.943258       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 10:14:07.943267       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [60df752bda6db8b92e0f147182a3bba7647274349456a921663b7a71421bb064] <==
	I1123 10:14:06.195338       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:14:07.003392       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:14:07.203960       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:14:07.204005       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 10:14:07.204072       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:14:07.366349       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:14:07.373603       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:14:07.411298       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:14:07.411653       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:14:07.415106       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:14:07.416435       1 config.go:200] "Starting service config controller"
	I1123 10:14:07.416457       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:14:07.416473       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:14:07.416484       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:14:07.416505       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:14:07.416513       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:14:07.417151       1 config.go:309] "Starting node config controller"
	I1123 10:14:07.417168       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:14:07.417175       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:14:07.517105       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:14:07.517156       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:14:07.517194       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f6adced2438dde36562063e35389aaa6f93406583a489e9200e01abeac6d2ba2] <==
	I1123 10:13:58.346567       1 serving.go:386] Generated self-signed cert in-memory
	W1123 10:14:03.209654       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 10:14:03.209747       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:14:03.209781       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 10:14:03.209811       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 10:14:03.471403       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:14:03.471431       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:14:03.486371       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:14:03.486492       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:14:03.486509       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:14:03.486525       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:14:03.699990       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:14:08 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:08.598504     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxjtq\" (UniqueName: \"kubernetes.io/projected/90da865e-da95-477f-8c9d-e94af6db5c3b-kube-api-access-dxjtq\") pod \"dashboard-metrics-scraper-6ffb444bf9-zzt82\" (UID: \"90da865e-da95-477f-8c9d-e94af6db5c3b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82"
	Nov 23 10:14:08 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:08.598576     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6n7q\" (UniqueName: \"kubernetes.io/projected/eb24f5e7-c61d-442a-91a6-e5d5c11eb288-kube-api-access-s6n7q\") pod \"kubernetes-dashboard-855c9754f9-8wqtw\" (UID: \"eb24f5e7-c61d-442a-91a6-e5d5c11eb288\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8wqtw"
	Nov 23 10:14:08 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:08.598599     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/eb24f5e7-c61d-442a-91a6-e5d5c11eb288-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-8wqtw\" (UID: \"eb24f5e7-c61d-442a-91a6-e5d5c11eb288\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8wqtw"
	Nov 23 10:14:08 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:08.598639     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/90da865e-da95-477f-8c9d-e94af6db5c3b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-zzt82\" (UID: \"90da865e-da95-477f-8c9d-e94af6db5c3b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82"
	Nov 23 10:14:08 default-k8s-diff-port-330197 kubelet[781]: W1123 10:14:08.838854     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c/crio-c6f0b3efbeae5e54329a1d042597ae5acd1e34f439dd95128728d63d5022d59c WatchSource:0}: Error finding container c6f0b3efbeae5e54329a1d042597ae5acd1e34f439dd95128728d63d5022d59c: Status 404 returned error can't find the container with id c6f0b3efbeae5e54329a1d042597ae5acd1e34f439dd95128728d63d5022d59c
	Nov 23 10:14:13 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:13.039860     781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 10:14:23 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:23.473339     781 scope.go:117] "RemoveContainer" containerID="6e223668723aafd868ca4d75a4713421daa150df1acde72152872a23a87d1dc9"
	Nov 23 10:14:23 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:23.500306     781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8wqtw" podStartSLOduration=8.305166818 podStartE2EDuration="15.500288071s" podCreationTimestamp="2025-11-23 10:14:08 +0000 UTC" firstStartedPulling="2025-11-23 10:14:08.814413958 +0000 UTC m=+12.872608349" lastFinishedPulling="2025-11-23 10:14:16.009535211 +0000 UTC m=+20.067729602" observedRunningTime="2025-11-23 10:14:16.47704903 +0000 UTC m=+20.535243429" watchObservedRunningTime="2025-11-23 10:14:23.500288071 +0000 UTC m=+27.558482470"
	Nov 23 10:14:24 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:24.477072     781 scope.go:117] "RemoveContainer" containerID="6e223668723aafd868ca4d75a4713421daa150df1acde72152872a23a87d1dc9"
	Nov 23 10:14:24 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:24.477839     781 scope.go:117] "RemoveContainer" containerID="f1d50ecf0c6fe034966d82e9cc11ed2015e8e2c5ec1f4d71e574d03604c8d48e"
	Nov 23 10:14:24 default-k8s-diff-port-330197 kubelet[781]: E1123 10:14:24.478111     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zzt82_kubernetes-dashboard(90da865e-da95-477f-8c9d-e94af6db5c3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82" podUID="90da865e-da95-477f-8c9d-e94af6db5c3b"
	Nov 23 10:14:25 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:25.480968     781 scope.go:117] "RemoveContainer" containerID="f1d50ecf0c6fe034966d82e9cc11ed2015e8e2c5ec1f4d71e574d03604c8d48e"
	Nov 23 10:14:25 default-k8s-diff-port-330197 kubelet[781]: E1123 10:14:25.481144     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zzt82_kubernetes-dashboard(90da865e-da95-477f-8c9d-e94af6db5c3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82" podUID="90da865e-da95-477f-8c9d-e94af6db5c3b"
	Nov 23 10:14:28 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:28.796752     781 scope.go:117] "RemoveContainer" containerID="f1d50ecf0c6fe034966d82e9cc11ed2015e8e2c5ec1f4d71e574d03604c8d48e"
	Nov 23 10:14:28 default-k8s-diff-port-330197 kubelet[781]: E1123 10:14:28.796931     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zzt82_kubernetes-dashboard(90da865e-da95-477f-8c9d-e94af6db5c3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82" podUID="90da865e-da95-477f-8c9d-e94af6db5c3b"
	Nov 23 10:14:36 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:36.516238     781 scope.go:117] "RemoveContainer" containerID="89a95d5fe671d4bfa2f423fba99ec7d957ff464f9e3b91c5863c5d7913e94d04"
	Nov 23 10:14:41 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:41.093783     781 scope.go:117] "RemoveContainer" containerID="f1d50ecf0c6fe034966d82e9cc11ed2015e8e2c5ec1f4d71e574d03604c8d48e"
	Nov 23 10:14:41 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:41.532246     781 scope.go:117] "RemoveContainer" containerID="f1d50ecf0c6fe034966d82e9cc11ed2015e8e2c5ec1f4d71e574d03604c8d48e"
	Nov 23 10:14:41 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:41.533964     781 scope.go:117] "RemoveContainer" containerID="991ccbc0c6f8554770578bb5b28255043887809251365e1433a8fef879d23513"
	Nov 23 10:14:41 default-k8s-diff-port-330197 kubelet[781]: E1123 10:14:41.534276     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zzt82_kubernetes-dashboard(90da865e-da95-477f-8c9d-e94af6db5c3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82" podUID="90da865e-da95-477f-8c9d-e94af6db5c3b"
	Nov 23 10:14:48 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:48.796583     781 scope.go:117] "RemoveContainer" containerID="991ccbc0c6f8554770578bb5b28255043887809251365e1433a8fef879d23513"
	Nov 23 10:14:48 default-k8s-diff-port-330197 kubelet[781]: E1123 10:14:48.797819     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zzt82_kubernetes-dashboard(90da865e-da95-477f-8c9d-e94af6db5c3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82" podUID="90da865e-da95-477f-8c9d-e94af6db5c3b"
	Nov 23 10:14:57 default-k8s-diff-port-330197 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:14:57 default-k8s-diff-port-330197 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:14:57 default-k8s-diff-port-330197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [872344c350f1c0db76811cd62d9d7adaa803f3c0d3efcaf1a806e4f1fc4df822] <==
	2025/11/23 10:14:16 Using namespace: kubernetes-dashboard
	2025/11/23 10:14:16 Using in-cluster config to connect to apiserver
	2025/11/23 10:14:16 Using secret token for csrf signing
	2025/11/23 10:14:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 10:14:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 10:14:16 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 10:14:16 Generating JWE encryption key
	2025/11/23 10:14:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 10:14:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 10:14:17 Initializing JWE encryption key from synchronized object
	2025/11/23 10:14:17 Creating in-cluster Sidecar client
	2025/11/23 10:14:17 Serving insecurely on HTTP port: 9090
	2025/11/23 10:14:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:14:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:14:16 Starting overwatch
	
	
	==> storage-provisioner [87de444d24b2febb76dda7b50f414db46d89c0d8fc63cbf46209d99a0e01672d] <==
	I1123 10:14:36.569673       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:14:36.583797       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:14:36.583854       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:14:36.586302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:40.042664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:44.303432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:47.902390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:50.957023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:53.979595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:53.984577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:14:53.984765       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:14:53.984967       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-330197_591ba971-05d3-40bc-bfbc-0ace58a0e4e6!
	I1123 10:14:53.985722       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5c776180-63cd-4909-9a5b-31f492baafc6", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-330197_591ba971-05d3-40bc-bfbc-0ace58a0e4e6 became leader
	W1123 10:14:53.989100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:53.996203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:14:54.085567       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-330197_591ba971-05d3-40bc-bfbc-0ace58a0e4e6!
	W1123 10:14:55.999592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:56.006351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:58.010569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:58.019524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:15:00.023985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:15:00.048206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [89a95d5fe671d4bfa2f423fba99ec7d957ff464f9e3b91c5863c5d7913e94d04] <==
	I1123 10:14:06.283392       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 10:14:36.289675       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-330197 -n default-k8s-diff-port-330197
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-330197 -n default-k8s-diff-port-330197: exit status 2 (368.948445ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-330197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
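For manual triage, the non-Running-pod check run just above can be reproduced outside the harness with the same kubectl flags; a minimal sketch, assuming the default-k8s-diff-port-330197 context is still present in the kubeconfig:

	kubectl --context default-k8s-diff-port-330197 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'

An empty result means every pod reported phase Running at snapshot time; note that containers in CrashLoopBackOff (such as dashboard-metrics-scraper in the logs above) usually keep their pod in the Running phase, so they would not appear in this query.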
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-330197
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-330197:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c",
	        "Created": "2025-11-23T10:12:08.256335726Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 529566,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:13:48.624470776Z",
	            "FinishedAt": "2025-11-23T10:13:47.505247688Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c/hostname",
	        "HostsPath": "/var/lib/docker/containers/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c/hosts",
	        "LogPath": "/var/lib/docker/containers/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c-json.log",
	        "Name": "/default-k8s-diff-port-330197",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-330197:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-330197",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c",
	                "LowerDir": "/var/lib/docker/overlay2/3f48485d51ecbb271eab092e267a4905e984900c5592bc0d63966db4bfd4a0c4-init/diff:/var/lib/docker/overlay2/22ccefb2112e452ccd498554867c9844443c2b156dc7e52debe9b79b4e52c2a7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3f48485d51ecbb271eab092e267a4905e984900c5592bc0d63966db4bfd4a0c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3f48485d51ecbb271eab092e267a4905e984900c5592bc0d63966db4bfd4a0c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3f48485d51ecbb271eab092e267a4905e984900c5592bc0d63966db4bfd4a0c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-330197",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-330197/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-330197",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-330197",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-330197",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "91454532723a66b4aa8f586d2ab4a13260c6f0225e6cd8510a3174f39762d934",
	            "SandboxKey": "/var/run/docker/netns/91454532723a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33500"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-330197": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:1b:c5:58:e1:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "648b049bc86ff8eff41f306c615c5a3664920d5b8756357da481331ccc4f062a",
	                    "EndpointID": "4fc785b021d07b9cc23d602f24866619da430e72c9471b11544de250de2baa49",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-330197",
	                        "001c54c15317"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
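The inspect dump above is captured in full by the post-mortem helper; for hand checks the same data can be narrowed with docker's --format Go template. A sketch using only fields that appear verbatim in the JSON above:

	# container state and last finish time
	docker inspect -f '{{.State.Status}} {{.State.FinishedAt}}' default-k8s-diff-port-330197
	# host port mapped to the profile's API server port (8444/tcp)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-330197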
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-330197 -n default-k8s-diff-port-330197
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-330197 -n default-k8s-diff-port-330197: exit status 2 (410.308051ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-330197 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-330197 logs -n 25: (1.263710624s)
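The 25-line excerpt that follows comes from the logs -n 25 call above; when a failure needs more context than the post-mortem captures, the same subcommand accepts a larger line count or can write the complete log to a file (a sketch, assuming the profile has not been deleted yet):

	out/minikube-linux-arm64 -p default-k8s-diff-port-330197 logs -n 200
	out/minikube-linux-arm64 -p default-k8s-diff-port-330197 logs --file=default-k8s-diff-port-330197.log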
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p disable-driver-mounts-097888                                                                                                                                                                                                               │ disable-driver-mounts-097888 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-566990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │                     │
	│ stop    │ -p embed-certs-566990 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ addons  │ enable dashboard -p embed-certs-566990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:12 UTC │
	│ start   │ -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:12 UTC │ 23 Nov 25 10:13 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-330197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-330197 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ image   │ embed-certs-566990 image list --format=json                                                                                                                                                                                                   │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ pause   │ -p embed-certs-566990 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │                     │
	│ delete  │ -p embed-certs-566990                                                                                                                                                                                                                         │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ delete  │ -p embed-certs-566990                                                                                                                                                                                                                         │ embed-certs-566990           │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ start   │ -p newest-cni-499584 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:14 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-330197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:13 UTC │
	│ start   │ -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:13 UTC │ 23 Nov 25 10:14 UTC │
	│ addons  │ enable metrics-server -p newest-cni-499584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │                     │
	│ stop    │ -p newest-cni-499584 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ addons  │ enable dashboard -p newest-cni-499584 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ start   │ -p newest-cni-499584 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ image   │ newest-cni-499584 image list --format=json                                                                                                                                                                                                    │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ pause   │ -p newest-cni-499584 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │                     │
	│ image   │ default-k8s-diff-port-330197 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ pause   │ -p default-k8s-diff-port-330197 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-330197 │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │                     │
	│ delete  │ -p newest-cni-499584                                                                                                                                                                                                                          │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	│ delete  │ -p newest-cni-499584                                                                                                                                                                                                                          │ newest-cni-499584            │ jenkins │ v1.37.0 │ 23 Nov 25 10:14 UTC │ 23 Nov 25 10:14 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:14:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:14:35.308429  534325 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:14:35.308606  534325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:14:35.308636  534325 out.go:374] Setting ErrFile to fd 2...
	I1123 10:14:35.308660  534325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:14:35.308962  534325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 10:14:35.309373  534325 out.go:368] Setting JSON to false
	I1123 10:14:35.310426  534325 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10624,"bootTime":1763882251,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 10:14:35.310533  534325 start.go:143] virtualization:  
	I1123 10:14:35.314130  534325 out.go:179] * [newest-cni-499584] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:14:35.318169  534325 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 10:14:35.318306  534325 notify.go:221] Checking for updates...
	I1123 10:14:35.324604  534325 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:14:35.327708  534325 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:14:35.330662  534325 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 10:14:35.333670  534325 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:14:35.336633  534325 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:14:35.340028  534325 config.go:182] Loaded profile config "newest-cni-499584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:14:35.340614  534325 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:14:35.371141  534325 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:14:35.371271  534325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:14:35.436934  534325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:14:35.426581673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:14:35.437078  534325 docker.go:319] overlay module found
	I1123 10:14:35.440404  534325 out.go:179] * Using the docker driver based on existing profile
	I1123 10:14:35.443223  534325 start.go:309] selected driver: docker
	I1123 10:14:35.443245  534325 start.go:927] validating driver "docker" against &{Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:14:35.443371  534325 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:14:35.444083  534325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:14:35.497494  534325 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:14:35.4876245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:14:35.497858  534325 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:14:35.497886  534325 cni.go:84] Creating CNI manager for ""
	I1123 10:14:35.497946  534325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:14:35.497992  534325 start.go:353] cluster config:
	{Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:14:35.503042  534325 out.go:179] * Starting "newest-cni-499584" primary control-plane node in "newest-cni-499584" cluster
	I1123 10:14:35.505793  534325 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 10:14:35.508757  534325 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:14:35.511894  534325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:14:35.511952  534325 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 10:14:35.511963  534325 cache.go:65] Caching tarball of preloaded images
	I1123 10:14:35.512065  534325 preload.go:238] Found /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1123 10:14:35.512076  534325 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 10:14:35.512186  534325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/config.json ...
	I1123 10:14:35.512284  534325 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:14:35.537457  534325 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:14:35.537482  534325 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:14:35.537505  534325 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:14:35.537537  534325 start.go:360] acquireMachinesLock for newest-cni-499584: {Name:mk060761daeb1a62836bf24a9b9e867393b1f580 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:14:35.537611  534325 start.go:364] duration metric: took 51.693µs to acquireMachinesLock for "newest-cni-499584"
	I1123 10:14:35.537632  534325 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:14:35.537637  534325 fix.go:54] fixHost starting: 
	I1123 10:14:35.537894  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:35.553302  534325 fix.go:112] recreateIfNeeded on newest-cni-499584: state=Stopped err=<nil>
	W1123 10:14:35.553334  534325 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 10:14:35.397704  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	W1123 10:14:37.895831  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	I1123 10:14:35.556606  534325 out.go:252] * Restarting existing docker container for "newest-cni-499584" ...
	I1123 10:14:35.556691  534325 cli_runner.go:164] Run: docker start newest-cni-499584
	I1123 10:14:35.810485  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:35.841678  534325 kic.go:430] container "newest-cni-499584" state is running.
	I1123 10:14:35.842092  534325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-499584
	I1123 10:14:35.865371  534325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/config.json ...
	I1123 10:14:35.865636  534325 machine.go:94] provisionDockerMachine start ...
	I1123 10:14:35.865701  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
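Editor's note: the inspect template above resolves the node's SSH endpoint. With the docker driver the guest's port 22 is published on an ephemeral host port (33506 here), which the libmachine SSH client on the next lines dials on 127.0.0.1. A rough manual equivalent (hypothetical check, not part of the test run):

    # Show the host port Docker mapped to the container's SSH port 22
    docker port newest-cni-499584 22/tcp
    # e.g. 0.0.0.0:33506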
	I1123 10:14:35.888754  534325 main.go:143] libmachine: Using SSH client type: native
	I1123 10:14:35.889100  534325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I1123 10:14:35.889118  534325 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:14:35.889757  534325 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 10:14:39.045105  534325 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-499584
	
	I1123 10:14:39.045131  534325 ubuntu.go:182] provisioning hostname "newest-cni-499584"
	I1123 10:14:39.045248  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:39.063953  534325 main.go:143] libmachine: Using SSH client type: native
	I1123 10:14:39.064263  534325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I1123 10:14:39.064279  534325 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-499584 && echo "newest-cni-499584" | sudo tee /etc/hostname
	I1123 10:14:39.227033  534325 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-499584
	
	I1123 10:14:39.227156  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:39.245578  534325 main.go:143] libmachine: Using SSH client type: native
	I1123 10:14:39.245894  534325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I1123 10:14:39.245917  534325 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-499584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-499584/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-499584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:14:39.399001  534325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:14:39.399024  534325 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-282998/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-282998/.minikube}
	I1123 10:14:39.399055  534325 ubuntu.go:190] setting up certificates
	I1123 10:14:39.399073  534325 provision.go:84] configureAuth start
	I1123 10:14:39.399131  534325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-499584
	I1123 10:14:39.416579  534325 provision.go:143] copyHostCerts
	I1123 10:14:39.416662  534325 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem, removing ...
	I1123 10:14:39.416680  534325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem
	I1123 10:14:39.416761  534325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/cert.pem (1123 bytes)
	I1123 10:14:39.416866  534325 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem, removing ...
	I1123 10:14:39.416870  534325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem
	I1123 10:14:39.416899  534325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/key.pem (1679 bytes)
	I1123 10:14:39.416961  534325 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem, removing ...
	I1123 10:14:39.416967  534325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem
	I1123 10:14:39.416990  534325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-282998/.minikube/ca.pem (1078 bytes)
	I1123 10:14:39.417043  534325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem org=jenkins.newest-cni-499584 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-499584]
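Editor's note: the regenerated server certificate carries the SANs listed above (loopback, the node IP 192.168.76.2, and the minikube hostnames). A quick way to confirm them on the generated server.pem (hypothetical check, standard openssl usage):

    # Print the Subject Alternative Name extension of the machine server certificate
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'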
	I1123 10:14:39.627598  534325 provision.go:177] copyRemoteCerts
	I1123 10:14:39.627689  534325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:14:39.627763  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:39.647970  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:39.757042  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:14:39.774686  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:14:39.794086  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 10:14:39.812248  534325 provision.go:87] duration metric: took 413.152366ms to configureAuth
	I1123 10:14:39.812274  534325 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:14:39.812473  534325 config.go:182] Loaded profile config "newest-cni-499584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:14:39.812587  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:39.831627  534325 main.go:143] libmachine: Using SSH client type: native
	I1123 10:14:39.831936  534325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I1123 10:14:39.831959  534325 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 10:14:40.202065  534325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 10:14:40.202163  534325 machine.go:97] duration metric: took 4.336516564s to provisionDockerMachine
	I1123 10:14:40.202198  534325 start.go:293] postStartSetup for "newest-cni-499584" (driver="docker")
	I1123 10:14:40.202228  534325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:14:40.202328  534325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:14:40.202388  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:40.221920  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:40.329811  534325 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:14:40.333732  534325 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:14:40.333763  534325 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:14:40.333774  534325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/addons for local assets ...
	I1123 10:14:40.333829  534325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-282998/.minikube/files for local assets ...
	I1123 10:14:40.333908  534325 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem -> 2849042.pem in /etc/ssl/certs
	I1123 10:14:40.334018  534325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:14:40.341710  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:14:40.360854  534325 start.go:296] duration metric: took 158.623442ms for postStartSetup
	I1123 10:14:40.360956  534325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:14:40.361017  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:40.378625  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:40.482834  534325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:14:40.488027  534325 fix.go:56] duration metric: took 4.950382019s for fixHost
	I1123 10:14:40.488055  534325 start.go:83] releasing machines lock for "newest-cni-499584", held for 4.950434147s
	I1123 10:14:40.488126  534325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-499584
	I1123 10:14:40.507445  534325 ssh_runner.go:195] Run: cat /version.json
	I1123 10:14:40.507515  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:40.507781  534325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:14:40.507851  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:40.526536  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:40.540744  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:40.633331  534325 ssh_runner.go:195] Run: systemctl --version
	I1123 10:14:40.735956  534325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 10:14:40.771974  534325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:14:40.776327  534325 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:14:40.776407  534325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:14:40.784473  534325 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
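Editor's note: the find invocation above is logged with its shell escaping stripped; a readable, properly quoted rendering of the same idea (a paraphrase, not the literal string minikube executes) is:

    # Rename any bridge/podman CNI configs so only the CNI minikube manages stays active
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;

In this run nothing matched, so the step was a no-op ("nothing to disable").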
	I1123 10:14:40.784497  534325 start.go:496] detecting cgroup driver to use...
	I1123 10:14:40.784529  534325 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:14:40.784595  534325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 10:14:40.802198  534325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 10:14:40.815436  534325 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:14:40.815516  534325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:14:40.833773  534325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:14:40.847175  534325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:14:40.964031  534325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:14:41.079493  534325 docker.go:234] disabling docker service ...
	I1123 10:14:41.079592  534325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:14:41.099078  534325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:14:41.126997  534325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:14:41.246640  534325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:14:41.357516  534325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:14:41.371610  534325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:14:41.386018  534325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 10:14:41.386151  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.400270  534325 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1123 10:14:41.400380  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.410282  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.419540  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.429966  534325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:14:41.442207  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.452481  534325 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 10:14:41.462238  534325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
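Editor's note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands, not read back from the node):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]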
	I1123 10:14:41.472504  534325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:14:41.480544  534325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:14:41.488228  534325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:14:41.629522  534325 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 10:14:41.808644  534325 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 10:14:41.808710  534325 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 10:14:41.812474  534325 start.go:564] Will wait 60s for crictl version
	I1123 10:14:41.812551  534325 ssh_runner.go:195] Run: which crictl
	I1123 10:14:41.816298  534325 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:14:41.846825  534325 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 10:14:41.846917  534325 ssh_runner.go:195] Run: crio --version
	I1123 10:14:41.875254  534325 ssh_runner.go:195] Run: crio --version
	I1123 10:14:41.910420  534325 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 10:14:41.913261  534325 cli_runner.go:164] Run: docker network inspect newest-cni-499584 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:14:41.928711  534325 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:14:41.932661  534325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
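Editor's note: the one-liner above is the pattern minikube uses to edit /etc/hosts inside the container: drop any existing host.minikube.internal entry, append the current gateway IP, write to a temp file, then copy it back with sudo. Copying (rather than moving) is likely deliberate, since Docker bind-mounts /etc/hosts into the container and the file can only be rewritten in place. Unfolded for readability (same commands, reformatted; not the literal logged string):

    # Replace the host.minikube.internal entry with the current gateway address, in place
    {
      grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.76.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts

The same pattern is applied again further down for control-plane.minikube.internal (192.168.76.2).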
	I1123 10:14:41.945351  534325 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1123 10:14:39.896516  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	W1123 10:14:41.896853  529379 pod_ready.go:104] pod "coredns-66bc5c9577-pphv6" is not "Ready", error: <nil>
	I1123 10:14:41.948184  534325 kubeadm.go:884] updating cluster {Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:14:41.948333  534325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 10:14:41.948404  534325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:14:41.983037  534325 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:14:41.983059  534325 crio.go:433] Images already preloaded, skipping extraction
	I1123 10:14:41.983122  534325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:14:42.012647  534325 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 10:14:42.012670  534325 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:14:42.012684  534325 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 10:14:42.012801  534325 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-499584 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
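Editor's note: the kubelet drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (367 bytes). If the profile is still running, the rendered unit can be inspected from the host with something like the following (hypothetical check; assumes minikube ssh's command passthrough form):

    # Show kubelet.service together with minikube's 10-kubeadm.conf drop-in on the node
    minikube -p newest-cni-499584 ssh "systemctl cat kubelet"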
	I1123 10:14:42.012913  534325 ssh_runner.go:195] Run: crio config
	I1123 10:14:42.085762  534325 cni.go:84] Creating CNI manager for ""
	I1123 10:14:42.085845  534325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 10:14:42.085887  534325 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 10:14:42.085940  534325 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-499584 NodeName:newest-cni-499584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:14:42.086144  534325 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-499584"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:14:42.086272  534325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:14:42.099771  534325 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:14:42.099872  534325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:14:42.109476  534325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 10:14:42.125982  534325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:14:42.143068  534325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
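Editor's note: the generated kubeadm configuration (the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration documents printed above) is staged as /var/tmp/minikube/kubeadm.yaml.new. A sanity check one could run on the node, assuming the bundled kubeadm supports the validate subcommand and lives in the binaries directory listed below:

    # Validate the staged kubeadm config against the v1.34.1 API types (hypothetical check)
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new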
	I1123 10:14:42.162444  534325 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:14:42.167161  534325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:14:42.179960  534325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:14:42.317087  534325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:14:42.336114  534325 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584 for IP: 192.168.76.2
	I1123 10:14:42.336139  534325 certs.go:195] generating shared ca certs ...
	I1123 10:14:42.336157  534325 certs.go:227] acquiring lock for ca certs: {Name:mk7909e2a1d0387673d6b2deba1a84fe3efafe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:42.336301  534325 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key
	I1123 10:14:42.336359  534325 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key
	I1123 10:14:42.336372  534325 certs.go:257] generating profile certs ...
	I1123 10:14:42.336466  534325 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/client.key
	I1123 10:14:42.336546  534325 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.key.22d7de13
	I1123 10:14:42.336598  534325 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.key
	I1123 10:14:42.336725  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem (1338 bytes)
	W1123 10:14:42.336762  534325 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904_empty.pem, impossibly tiny 0 bytes
	I1123 10:14:42.336780  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:14:42.336809  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:14:42.336841  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:14:42.336874  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/certs/key.pem (1679 bytes)
	I1123 10:14:42.336925  534325 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem (1708 bytes)
	I1123 10:14:42.337678  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:14:42.363407  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 10:14:42.382696  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:14:42.404059  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:14:42.425457  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:14:42.449626  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:14:42.473741  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:14:42.503125  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/newest-cni-499584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:14:42.539613  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/certs/284904.pem --> /usr/share/ca-certificates/284904.pem (1338 bytes)
	I1123 10:14:42.564062  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/ssl/certs/2849042.pem --> /usr/share/ca-certificates/2849042.pem (1708 bytes)
	I1123 10:14:42.584177  534325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:14:42.606938  534325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:14:42.620683  534325 ssh_runner.go:195] Run: openssl version
	I1123 10:14:42.627518  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/284904.pem && ln -fs /usr/share/ca-certificates/284904.pem /etc/ssl/certs/284904.pem"
	I1123 10:14:42.636658  534325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/284904.pem
	I1123 10:14:42.640680  534325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 09:04 /usr/share/ca-certificates/284904.pem
	I1123 10:14:42.640792  534325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/284904.pem
	I1123 10:14:42.685708  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/284904.pem /etc/ssl/certs/51391683.0"
	I1123 10:14:42.696740  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2849042.pem && ln -fs /usr/share/ca-certificates/2849042.pem /etc/ssl/certs/2849042.pem"
	I1123 10:14:42.705991  534325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2849042.pem
	I1123 10:14:42.709884  534325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 09:04 /usr/share/ca-certificates/2849042.pem
	I1123 10:14:42.709962  534325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2849042.pem
	I1123 10:14:42.750861  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2849042.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:14:42.759122  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:14:42.767889  534325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:14:42.772006  534325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:14:42.772075  534325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:14:42.814113  534325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
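Editor's note: the 51391683.0, 3ec20f2e.0 and b5213941.0 link names above are not arbitrary. OpenSSL's CA directory lookup expects each certificate to be reachable as <subject-hash>.N, and that hash is exactly what the preceding `openssl x509 -hash -noout` calls compute. For example:

    # The printed hash is the basename used for the /etc/ssl/certs symlink
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941   -> linked as /etc/ssl/certs/b5213941.0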
	I1123 10:14:42.823220  534325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:14:42.827234  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:14:42.869026  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:14:42.913588  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:14:42.969491  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:14:43.021834  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:14:43.089424  534325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
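Editor's note: the `-checkend 86400` probes above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds); openssl exits 0 if the cert remains valid past that window and 1 otherwise, which is presumably how minikube decides whether certificates need regenerating. For example:

    # Exit status 0: cert still valid 24h from now; 1: due to expire
    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt
    echo $?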
	I1123 10:14:43.174105  534325 kubeadm.go:401] StartCluster: {Name:newest-cni-499584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-499584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:14:43.174258  534325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 10:14:43.174371  534325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:14:43.253263  534325 cri.go:89] found id: "55951d0d04b4f47b2d5b5cf62ccc211475fb562db7126dbfd7b727861257eac0"
	I1123 10:14:43.253335  534325 cri.go:89] found id: "29bf860a34581ef12a5c2e695cb5c4f9bee91e4dfc153fd656e57d8c48fa1f90"
	I1123 10:14:43.253355  534325 cri.go:89] found id: "5ba284305a4abd603bf7200240b618f02ce262119d49e43b0da6cf7313bbc7be"
	I1123 10:14:43.253381  534325 cri.go:89] found id: "7ef630ea40b951127d767ee2e09ebb4700a9b36e54474665707cf2be5860d032"
	I1123 10:14:43.253424  534325 cri.go:89] found id: ""
	I1123 10:14:43.253525  534325 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 10:14:43.275277  534325 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T10:14:43Z" level=error msg="open /run/runc: no such file or directory"
	I1123 10:14:43.275413  534325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:14:43.288015  534325 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:14:43.288090  534325 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:14:43.288184  534325 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:14:43.300089  534325 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:14:43.300769  534325 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-499584" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:14:43.301095  534325 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-282998/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-499584" cluster setting kubeconfig missing "newest-cni-499584" context setting]
	I1123 10:14:43.301637  534325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:43.303525  534325 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:14:43.315357  534325 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 10:14:43.315438  534325 kubeadm.go:602] duration metric: took 27.319463ms to restartPrimaryControlPlane
	I1123 10:14:43.315462  534325 kubeadm.go:403] duration metric: took 141.368104ms to StartCluster
	I1123 10:14:43.315508  534325 settings.go:142] acquiring lock: {Name:mk21f4e12498409c3260b2be7accf2403e14ae02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:43.315602  534325 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 10:14:43.316666  534325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/kubeconfig: {Name:mk95463383f8aa50824b49faf7622cd42aa59a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:14:43.316959  534325 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 10:14:43.317718  534325 config.go:182] Loaded profile config "newest-cni-499584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 10:14:43.317697  534325 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:14:43.317796  534325 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-499584"
	I1123 10:14:43.317809  534325 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-499584"
	W1123 10:14:43.317818  534325 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:14:43.317823  534325 addons.go:70] Setting dashboard=true in profile "newest-cni-499584"
	I1123 10:14:43.317850  534325 addons.go:70] Setting default-storageclass=true in profile "newest-cni-499584"
	I1123 10:14:43.317863  534325 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-499584"
	I1123 10:14:43.317855  534325 addons.go:239] Setting addon dashboard=true in "newest-cni-499584"
	W1123 10:14:43.317900  534325 addons.go:248] addon dashboard should already be in state true
	I1123 10:14:43.317929  534325 host.go:66] Checking if "newest-cni-499584" exists ...
	I1123 10:14:43.318191  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:43.317844  534325 host.go:66] Checking if "newest-cni-499584" exists ...
	I1123 10:14:43.319197  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:43.319350  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:43.322853  534325 out.go:179] * Verifying Kubernetes components...
	I1123 10:14:43.326101  534325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:14:43.365483  534325 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:14:43.375618  534325 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:14:43.380627  534325 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:14:43.413785  529379 pod_ready.go:94] pod "coredns-66bc5c9577-pphv6" is "Ready"
	I1123 10:14:43.413811  529379 pod_ready.go:86] duration metric: took 36.023476036s for pod "coredns-66bc5c9577-pphv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.423051  529379 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.472047  529379 pod_ready.go:94] pod "etcd-default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:43.472071  529379 pod_ready.go:86] duration metric: took 48.99566ms for pod "etcd-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.480493  529379 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.497900  529379 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:43.497926  529379 pod_ready.go:86] duration metric: took 17.40953ms for pod "kube-apiserver-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.501812  529379 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.594337  529379 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:43.594360  529379 pod_ready.go:86] duration metric: took 92.526931ms for pod "kube-controller-manager-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:43.793574  529379 pod_ready.go:83] waiting for pod "kube-proxy-75qqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:44.193846  529379 pod_ready.go:94] pod "kube-proxy-75qqt" is "Ready"
	I1123 10:14:44.193870  529379 pod_ready.go:86] duration metric: took 400.271598ms for pod "kube-proxy-75qqt" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:44.394519  529379 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:44.794600  529379 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-330197" is "Ready"
	I1123 10:14:44.794624  529379 pod_ready.go:86] duration metric: took 400.080817ms for pod "kube-scheduler-default-k8s-diff-port-330197" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:14:44.794638  529379 pod_ready.go:40] duration metric: took 37.408453068s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:14:44.888790  529379 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:14:44.892050  529379 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-330197" cluster and "default" namespace by default
	I1123 10:14:43.380644  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:14:43.380714  534325 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:14:43.380780  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:43.384089  534325 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:14:43.384112  534325 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:14:43.384176  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:43.393141  534325 addons.go:239] Setting addon default-storageclass=true in "newest-cni-499584"
	W1123 10:14:43.393166  534325 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:14:43.393191  534325 host.go:66] Checking if "newest-cni-499584" exists ...
	I1123 10:14:43.393629  534325 cli_runner.go:164] Run: docker container inspect newest-cni-499584 --format={{.State.Status}}
	I1123 10:14:43.426018  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:43.454818  534325 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:14:43.454840  534325 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:14:43.454905  534325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-499584
	I1123 10:14:43.477805  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:43.499014  534325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/newest-cni-499584/id_rsa Username:docker}
	I1123 10:14:43.684857  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:14:43.684931  534325 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:14:43.710152  534325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:14:43.742808  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:14:43.742881  534325 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:14:43.748309  534325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:14:43.767832  534325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:14:43.777593  534325 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:14:43.777753  534325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:14:43.819844  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:14:43.819915  534325 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:14:43.906458  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:14:43.906530  534325 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:14:43.961147  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:14:43.961219  534325 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:14:44.045109  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:14:44.045187  534325 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:14:44.085266  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:14:44.085342  534325 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:14:44.122397  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:14:44.122478  534325 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:14:44.149167  534325 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:14:44.149243  534325 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:14:44.176014  534325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
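Editor's note: addon manifests are applied with the node's own kubectl binary and the in-cluster admin kubeconfig (/var/lib/minikube/kubeconfig) rather than the host's kubectl. Once the apply completes, the dashboard objects can be checked from the host, assuming the addon's usual kubernetes-dashboard namespace and the kubeconfig context minikube created for this profile:

    # List what the dashboard addon created (hypothetical follow-up, not part of the test)
    kubectl --context newest-cni-499584 -n kubernetes-dashboard get deploy,svc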
	I1123 10:14:49.874831  534325 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.126442363s)
	I1123 10:14:49.874895  534325 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.106997388s)
	I1123 10:14:49.875220  534325 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.097427038s)
	I1123 10:14:49.875249  534325 api_server.go:72] duration metric: took 6.558232564s to wait for apiserver process to appear ...
	I1123 10:14:49.875256  534325 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:14:49.875268  534325 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:14:49.875552  534325 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.699461585s)
	I1123 10:14:49.878710  534325 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-499584 addons enable metrics-server
	
	I1123 10:14:49.899521  534325 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:14:49.901259  534325 api_server.go:141] control plane version: v1.34.1
	I1123 10:14:49.901287  534325 api_server.go:131] duration metric: took 26.024714ms to wait for apiserver health ...
	I1123 10:14:49.901296  534325 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:14:49.907916  534325 system_pods.go:59] 8 kube-system pods found
	I1123 10:14:49.907956  534325 system_pods.go:61] "coredns-66bc5c9577-gpv4n" [3ac78ff6-250d-4ce6-ba6f-913ba5a46be8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:14:49.907965  534325 system_pods.go:61] "etcd-newest-cni-499584" [fbc5fde9-9d75-41ee-a27e-bea9e43c5c1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 10:14:49.907971  534325 system_pods.go:61] "kindnet-8pwmm" [3933503c-90da-4b79-98e7-e4a22d58813d] Running
	I1123 10:14:49.907978  534325 system_pods.go:61] "kube-apiserver-newest-cni-499584" [2a4c121c-305b-4eef-8b3a-127a1fef8812] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 10:14:49.907985  534325 system_pods.go:61] "kube-controller-manager-newest-cni-499584" [c00e062c-870f-4ed7-a05d-615fc6c7d81d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 10:14:49.907989  534325 system_pods.go:61] "kube-proxy-7ccmv" [8dace15f-cf56-4d36-9840-ceb07d85b8b0] Running
	I1123 10:14:49.907995  534325 system_pods.go:61] "kube-scheduler-newest-cni-499584" [94684fe3-8d3e-4f48-9dad-6f0c6414f3c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 10:14:49.908028  534325 system_pods.go:61] "storage-provisioner" [70f72df9-2a87-468c-9f4c-2df81d587a29] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 10:14:49.908042  534325 system_pods.go:74] duration metric: took 6.740578ms to wait for pod list to return data ...
	I1123 10:14:49.908051  534325 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:14:49.908970  534325 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 10:14:49.911793  534325 addons.go:530] duration metric: took 6.594095134s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 10:14:49.916336  534325 default_sa.go:45] found service account: "default"
	I1123 10:14:49.916411  534325 default_sa.go:55] duration metric: took 8.348899ms for default service account to be created ...
	I1123 10:14:49.916442  534325 kubeadm.go:587] duration metric: took 6.59942349s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 10:14:49.916485  534325 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:14:49.919381  534325 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:14:49.919471  534325 node_conditions.go:123] node cpu capacity is 2
	I1123 10:14:49.919500  534325 node_conditions.go:105] duration metric: took 2.99226ms to run NodePressure ...
	I1123 10:14:49.919527  534325 start.go:242] waiting for startup goroutines ...
	I1123 10:14:49.919552  534325 start.go:247] waiting for cluster config update ...
	I1123 10:14:49.919581  534325 start.go:256] writing updated cluster config ...
	I1123 10:14:49.919880  534325 ssh_runner.go:195] Run: rm -f paused
	I1123 10:14:50.006509  534325 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:14:50.010080  534325 out.go:179] * Done! kubectl is now configured to use "newest-cni-499584" cluster and "default" namespace by default
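
The startup log above gates on two things before declaring the profile ready: the apiserver /healthz endpoint returning 200 and the kube-system pod list becoming queryable. Assuming kubectl is still pointed at the newest-cni-499584 context (as the final line states), the same checks can be repeated by hand, for example:

	kubectl get --raw='/healthz'
	kubectl get pods -n kube-system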
	
	
	==> CRI-O <==
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.096562018Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.116516359Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.117299108Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.139068953Z" level=info msg="Created container 991ccbc0c6f8554770578bb5b28255043887809251365e1433a8fef879d23513: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82/dashboard-metrics-scraper" id=c91d5bb8-fe55-488a-97b6-10cc61c2637a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.144277158Z" level=info msg="Starting container: 991ccbc0c6f8554770578bb5b28255043887809251365e1433a8fef879d23513" id=cfceedd1-784c-4b3a-8ff2-1b965a286229 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.148054884Z" level=info msg="Started container" PID=1646 containerID=991ccbc0c6f8554770578bb5b28255043887809251365e1433a8fef879d23513 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82/dashboard-metrics-scraper id=cfceedd1-784c-4b3a-8ff2-1b965a286229 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c6f0b3efbeae5e54329a1d042597ae5acd1e34f439dd95128728d63d5022d59c
	Nov 23 10:14:41 default-k8s-diff-port-330197 conmon[1644]: conmon 991ccbc0c6f855477057 <ninfo>: container 1646 exited with status 1
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.538052584Z" level=info msg="Removing container: f1d50ecf0c6fe034966d82e9cc11ed2015e8e2c5ec1f4d71e574d03604c8d48e" id=69c24e5a-6d7d-4069-8697-a9b0c0fa0e37 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.545661861Z" level=info msg="Error loading conmon cgroup of container f1d50ecf0c6fe034966d82e9cc11ed2015e8e2c5ec1f4d71e574d03604c8d48e: cgroup deleted" id=69c24e5a-6d7d-4069-8697-a9b0c0fa0e37 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:14:41 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:41.548700358Z" level=info msg="Removed container f1d50ecf0c6fe034966d82e9cc11ed2015e8e2c5ec1f4d71e574d03604c8d48e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82/dashboard-metrics-scraper" id=69c24e5a-6d7d-4069-8697-a9b0c0fa0e37 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.417900769Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.422173066Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.422332248Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.422415301Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.429344223Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.42937674Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.429397344Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.432753688Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.432894761Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.432980653Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.438954983Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.4391178Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.439196275Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.445257532Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 10:14:46 default-k8s-diff-port-330197 crio[652]: time="2025-11-23T10:14:46.445293069Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	991ccbc0c6f85       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   c6f0b3efbeae5       dashboard-metrics-scraper-6ffb444bf9-zzt82             kubernetes-dashboard
	87de444d24b2f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   1490e7d0cba3f       storage-provisioner                                    kube-system
	872344c350f1c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   93396b70f5a0c       kubernetes-dashboard-855c9754f9-8wqtw                  kubernetes-dashboard
	97276357e27cf       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   ebc85df750f0b       coredns-66bc5c9577-pphv6                               kube-system
	0fc5d9f48de90       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   c566caa8cf467       busybox                                                default
	89a95d5fe671d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   1490e7d0cba3f       storage-provisioner                                    kube-system
	fab3c1340bb9f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   96b0a3f83b264       kindnet-wfv8n                                          kube-system
	60df752bda6db       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   51ac7c37980e8       kube-proxy-75qqt                                       kube-system
	42cc19608c6e5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   54f8f350c1a75       kube-controller-manager-default-k8s-diff-port-330197   kube-system
	f6adced2438dd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   7fc2bc5f7893d       kube-scheduler-default-k8s-diff-port-330197            kube-system
	49080a105e3a1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   4ac440c0b42f1       kube-apiserver-default-k8s-diff-port-330197            kube-system
	fe2851bd5d0e2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   764501f445bb1       etcd-default-k8s-diff-port-330197                      kube-system
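
The container status table is the CRI-level view of the node. Assuming SSH access to the profile shown in the surrounding logs, the same listing can be regenerated with crictl inside the node, for example:

	minikube -p default-k8s-diff-port-330197 ssh -- sudo crictl ps -a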
	
	
	==> coredns [97276357e27cf30604562b859301a5b21e5e2d2302ad432fc575ea7916ac030f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35438 - 6294 "HINFO IN 4520195747009942529.4274652798983577699. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.042713204s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
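
The "dial tcp 10.96.0.1:443: i/o timeout" errors above are failed connections to the in-cluster kubernetes Service (the first IP of the 10.96.0.0/12 service CIDR, i.e. the apiserver's ClusterIP), which is consistent with kube-proxy and the CNI still re-syncing during the restart window. The ClusterIP can be confirmed with:

	kubectl get svc kubernetes -n default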
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-330197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-330197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=default-k8s-diff-port-330197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_12_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:12:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-330197
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:14:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:14:44 +0000   Sun, 23 Nov 2025 10:12:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:14:44 +0000   Sun, 23 Nov 2025 10:12:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:14:44 +0000   Sun, 23 Nov 2025 10:12:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:14:44 +0000   Sun, 23 Nov 2025 10:13:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-330197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                9fb10197-c662-4288-a6e4-d39f9ec1d57e
	  Boot ID:                    728df74d-5f50-461c-8d62-9d80cc778630
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-66bc5c9577-pphv6                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m25s
	  kube-system                 etcd-default-k8s-diff-port-330197                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m29s
	  kube-system                 kindnet-wfv8n                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-330197             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-330197    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-75qqt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-330197             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zzt82              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8wqtw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m22s                  kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Normal   Starting                 2m37s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m30s                  kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s                  kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m30s                  kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m26s                  node-controller  Node default-k8s-diff-port-330197 event: Registered Node default-k8s-diff-port-330197 in Controller
	  Normal   NodeReady                102s                   kubelet          Node default-k8s-diff-port-330197 status is now: NodeReady
	  Normal   Starting                 67s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-330197 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node default-k8s-diff-port-330197 event: Registered Node default-k8s-diff-port-330197 in Controller
	
	
	==> dmesg <==
	[Nov23 09:52] overlayfs: idmapped layers are currently not supported
	[  +2.264882] overlayfs: idmapped layers are currently not supported
	[Nov23 09:53] overlayfs: idmapped layers are currently not supported
	[Nov23 09:54] overlayfs: idmapped layers are currently not supported
	[  +7.193769] overlayfs: idmapped layers are currently not supported
	[Nov23 09:55] overlayfs: idmapped layers are currently not supported
	[ +37.914778] overlayfs: idmapped layers are currently not supported
	[Nov23 09:56] overlayfs: idmapped layers are currently not supported
	[Nov23 10:01] overlayfs: idmapped layers are currently not supported
	[Nov23 10:02] overlayfs: idmapped layers are currently not supported
	[Nov23 10:03] overlayfs: idmapped layers are currently not supported
	[Nov23 10:04] overlayfs: idmapped layers are currently not supported
	[Nov23 10:05] overlayfs: idmapped layers are currently not supported
	[Nov23 10:06] overlayfs: idmapped layers are currently not supported
	[Nov23 10:07] overlayfs: idmapped layers are currently not supported
	[Nov23 10:08] overlayfs: idmapped layers are currently not supported
	[Nov23 10:09] overlayfs: idmapped layers are currently not supported
	[ +22.736452] overlayfs: idmapped layers are currently not supported
	[Nov23 10:10] overlayfs: idmapped layers are currently not supported
	[Nov23 10:11] overlayfs: idmapped layers are currently not supported
	[Nov23 10:12] overlayfs: idmapped layers are currently not supported
	[ +16.378417] overlayfs: idmapped layers are currently not supported
	[Nov23 10:13] overlayfs: idmapped layers are currently not supported
	[Nov23 10:14] overlayfs: idmapped layers are currently not supported
	[ +29.685025] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fe2851bd5d0e209023685855c54c561683dab32a8f4e2ac4aad2e94044d6da28] <==
	{"level":"warn","ts":"2025-11-23T10:14:01.676462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.699141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.720708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.738710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.790490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.803302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.822315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.845454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.862222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.876893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.906337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.939714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:01.971967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:02.042615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:02.043767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:02.061760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:02.079296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:02.106512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:02.125904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:02.140077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:14:02.239767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58586","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T10:14:05.826506Z","caller":"traceutil/trace.go:172","msg":"trace[1747657151] linearizableReadLoop","detail":"{readStateIndex:570; appliedIndex:570; }","duration":"120.628678ms","start":"2025-11-23T10:14:05.705857Z","end":"2025-11-23T10:14:05.826486Z","steps":["trace[1747657151] 'read index received'  (duration: 120.622648ms)","trace[1747657151] 'applied index is now lower than readState.Index'  (duration: 5.194µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T10:14:05.827292Z","caller":"traceutil/trace.go:172","msg":"trace[1733730109] transaction","detail":"{read_only:false; response_revision:541; number_of_response:1; }","duration":"133.416566ms","start":"2025-11-23T10:14:05.693858Z","end":"2025-11-23T10:14:05.827274Z","steps":["trace[1733730109] 'process raft request'  (duration: 132.949153ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T10:14:05.829049Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.121427ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:replication-controller\" limit:1 ","response":"range_response_count:1 size:763"}
	{"level":"info","ts":"2025-11-23T10:14:05.829124Z","caller":"traceutil/trace.go:172","msg":"trace[211356073] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:replication-controller; range_end:; response_count:1; response_revision:540; }","duration":"123.236622ms","start":"2025-11-23T10:14:05.705852Z","end":"2025-11-23T10:14:05.829089Z","steps":["trace[211356073] 'agreement among raft nodes before linearized reading'  (duration: 120.693557ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:15:03 up  2:57,  0 user,  load average: 6.00, 5.00, 3.85
	Linux default-k8s-diff-port-330197 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fab3c1340bb9fc1913f94cded9a1f0fba5136e42c2593fb7823cb21f94b031c1] <==
	I1123 10:14:06.026199       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:14:06.050885       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 10:14:06.051035       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:14:06.051048       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:14:06.051064       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:14:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:14:06.417783       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:14:06.417798       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:14:06.417806       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:14:06.418897       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 10:14:36.418905       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 10:14:36.418911       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 10:14:36.418994       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 10:14:36.419027       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1123 10:14:37.817941       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:14:37.818079       1 metrics.go:72] Registering metrics
	I1123 10:14:37.818186       1 controller.go:711] "Syncing nftables rules"
	I1123 10:14:46.417485       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:14:46.417630       1 main.go:301] handling current node
	I1123 10:14:56.425506       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:14:56.425541       1 main.go:301] handling current node
	
	
	==> kube-apiserver [49080a105e3a1028d971c78fae51a027ca689e779aae2b400ed02b743c540042] <==
	I1123 10:14:03.560384       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 10:14:03.560450       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 10:14:03.561015       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 10:14:03.564987       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 10:14:03.565213       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:14:03.682200       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 10:14:03.682224       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 10:14:03.682391       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 10:14:03.682637       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 10:14:03.707802       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 10:14:03.707842       1 policy_source.go:240] refreshing policies
	I1123 10:14:03.723572       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:14:03.776632       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1123 10:14:03.881784       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 10:14:04.094512       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:14:04.241328       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:14:06.296137       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:14:06.341462       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:14:06.429554       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:14:06.464830       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:14:06.839325       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.174.153"}
	I1123 10:14:06.918484       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.7.70"}
	I1123 10:14:08.182839       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:14:08.234638       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:14:08.281707       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
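
The two "allocated clusterIPs" lines show the apiserver assigning Service IPs for the dashboard addon applied earlier in this run; the resulting Services can be listed with:

	kubectl -n kubernetes-dashboard get svc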
	
	
	==> kube-controller-manager [42cc19608c6e58ebf338dc82a991b4cd9902c09d76a2fc3ad1709fb98fe71f1c] <==
	I1123 10:14:07.867193       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 10:14:07.867228       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 10:14:07.873819       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 10:14:07.873869       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:14:07.874022       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 10:14:07.876734       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 10:14:07.878629       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:14:07.881456       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 10:14:07.881629       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 10:14:07.887357       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:14:07.888653       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:14:07.892065       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 10:14:07.892134       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 10:14:07.894083       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 10:14:07.894181       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:14:07.894265       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-330197"
	I1123 10:14:07.894331       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 10:14:07.894806       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:14:07.901492       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:14:07.901660       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:14:07.909523       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:14:07.929458       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:14:07.943231       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:14:07.943258       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 10:14:07.943267       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [60df752bda6db8b92e0f147182a3bba7647274349456a921663b7a71421bb064] <==
	I1123 10:14:06.195338       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:14:07.003392       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:14:07.203960       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:14:07.204005       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 10:14:07.204072       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:14:07.366349       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:14:07.373603       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:14:07.411298       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:14:07.411653       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:14:07.415106       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:14:07.416435       1 config.go:200] "Starting service config controller"
	I1123 10:14:07.416457       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:14:07.416473       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:14:07.416484       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:14:07.416505       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:14:07.416513       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:14:07.417151       1 config.go:309] "Starting node config controller"
	I1123 10:14:07.417168       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:14:07.417175       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:14:07.517105       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:14:07.517156       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:14:07.517194       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f6adced2438dde36562063e35389aaa6f93406583a489e9200e01abeac6d2ba2] <==
	I1123 10:13:58.346567       1 serving.go:386] Generated self-signed cert in-memory
	W1123 10:14:03.209654       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 10:14:03.209747       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:14:03.209781       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 10:14:03.209811       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 10:14:03.471403       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:14:03.471431       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:14:03.486371       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:14:03.486492       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:14:03.486509       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:14:03.486525       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 10:14:03.699990       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:14:08 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:08.598504     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxjtq\" (UniqueName: \"kubernetes.io/projected/90da865e-da95-477f-8c9d-e94af6db5c3b-kube-api-access-dxjtq\") pod \"dashboard-metrics-scraper-6ffb444bf9-zzt82\" (UID: \"90da865e-da95-477f-8c9d-e94af6db5c3b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82"
	Nov 23 10:14:08 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:08.598576     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6n7q\" (UniqueName: \"kubernetes.io/projected/eb24f5e7-c61d-442a-91a6-e5d5c11eb288-kube-api-access-s6n7q\") pod \"kubernetes-dashboard-855c9754f9-8wqtw\" (UID: \"eb24f5e7-c61d-442a-91a6-e5d5c11eb288\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8wqtw"
	Nov 23 10:14:08 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:08.598599     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/eb24f5e7-c61d-442a-91a6-e5d5c11eb288-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-8wqtw\" (UID: \"eb24f5e7-c61d-442a-91a6-e5d5c11eb288\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8wqtw"
	Nov 23 10:14:08 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:08.598639     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/90da865e-da95-477f-8c9d-e94af6db5c3b-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-zzt82\" (UID: \"90da865e-da95-477f-8c9d-e94af6db5c3b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82"
	Nov 23 10:14:08 default-k8s-diff-port-330197 kubelet[781]: W1123 10:14:08.838854     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/001c54c15317ff75346e76f1617e468bf19711aab38f9ddafa0c3cb644d02c1c/crio-c6f0b3efbeae5e54329a1d042597ae5acd1e34f439dd95128728d63d5022d59c WatchSource:0}: Error finding container c6f0b3efbeae5e54329a1d042597ae5acd1e34f439dd95128728d63d5022d59c: Status 404 returned error can't find the container with id c6f0b3efbeae5e54329a1d042597ae5acd1e34f439dd95128728d63d5022d59c
	Nov 23 10:14:13 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:13.039860     781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 10:14:23 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:23.473339     781 scope.go:117] "RemoveContainer" containerID="6e223668723aafd868ca4d75a4713421daa150df1acde72152872a23a87d1dc9"
	Nov 23 10:14:23 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:23.500306     781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8wqtw" podStartSLOduration=8.305166818 podStartE2EDuration="15.500288071s" podCreationTimestamp="2025-11-23 10:14:08 +0000 UTC" firstStartedPulling="2025-11-23 10:14:08.814413958 +0000 UTC m=+12.872608349" lastFinishedPulling="2025-11-23 10:14:16.009535211 +0000 UTC m=+20.067729602" observedRunningTime="2025-11-23 10:14:16.47704903 +0000 UTC m=+20.535243429" watchObservedRunningTime="2025-11-23 10:14:23.500288071 +0000 UTC m=+27.558482470"
	Nov 23 10:14:24 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:24.477072     781 scope.go:117] "RemoveContainer" containerID="6e223668723aafd868ca4d75a4713421daa150df1acde72152872a23a87d1dc9"
	Nov 23 10:14:24 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:24.477839     781 scope.go:117] "RemoveContainer" containerID="f1d50ecf0c6fe034966d82e9cc11ed2015e8e2c5ec1f4d71e574d03604c8d48e"
	Nov 23 10:14:24 default-k8s-diff-port-330197 kubelet[781]: E1123 10:14:24.478111     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zzt82_kubernetes-dashboard(90da865e-da95-477f-8c9d-e94af6db5c3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82" podUID="90da865e-da95-477f-8c9d-e94af6db5c3b"
	Nov 23 10:14:25 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:25.480968     781 scope.go:117] "RemoveContainer" containerID="f1d50ecf0c6fe034966d82e9cc11ed2015e8e2c5ec1f4d71e574d03604c8d48e"
	Nov 23 10:14:25 default-k8s-diff-port-330197 kubelet[781]: E1123 10:14:25.481144     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zzt82_kubernetes-dashboard(90da865e-da95-477f-8c9d-e94af6db5c3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82" podUID="90da865e-da95-477f-8c9d-e94af6db5c3b"
	Nov 23 10:14:28 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:28.796752     781 scope.go:117] "RemoveContainer" containerID="f1d50ecf0c6fe034966d82e9cc11ed2015e8e2c5ec1f4d71e574d03604c8d48e"
	Nov 23 10:14:28 default-k8s-diff-port-330197 kubelet[781]: E1123 10:14:28.796931     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zzt82_kubernetes-dashboard(90da865e-da95-477f-8c9d-e94af6db5c3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82" podUID="90da865e-da95-477f-8c9d-e94af6db5c3b"
	Nov 23 10:14:36 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:36.516238     781 scope.go:117] "RemoveContainer" containerID="89a95d5fe671d4bfa2f423fba99ec7d957ff464f9e3b91c5863c5d7913e94d04"
	Nov 23 10:14:41 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:41.093783     781 scope.go:117] "RemoveContainer" containerID="f1d50ecf0c6fe034966d82e9cc11ed2015e8e2c5ec1f4d71e574d03604c8d48e"
	Nov 23 10:14:41 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:41.532246     781 scope.go:117] "RemoveContainer" containerID="f1d50ecf0c6fe034966d82e9cc11ed2015e8e2c5ec1f4d71e574d03604c8d48e"
	Nov 23 10:14:41 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:41.533964     781 scope.go:117] "RemoveContainer" containerID="991ccbc0c6f8554770578bb5b28255043887809251365e1433a8fef879d23513"
	Nov 23 10:14:41 default-k8s-diff-port-330197 kubelet[781]: E1123 10:14:41.534276     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zzt82_kubernetes-dashboard(90da865e-da95-477f-8c9d-e94af6db5c3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82" podUID="90da865e-da95-477f-8c9d-e94af6db5c3b"
	Nov 23 10:14:48 default-k8s-diff-port-330197 kubelet[781]: I1123 10:14:48.796583     781 scope.go:117] "RemoveContainer" containerID="991ccbc0c6f8554770578bb5b28255043887809251365e1433a8fef879d23513"
	Nov 23 10:14:48 default-k8s-diff-port-330197 kubelet[781]: E1123 10:14:48.797819     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zzt82_kubernetes-dashboard(90da865e-da95-477f-8c9d-e94af6db5c3b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zzt82" podUID="90da865e-da95-477f-8c9d-e94af6db5c3b"
	Nov 23 10:14:57 default-k8s-diff-port-330197 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 10:14:57 default-k8s-diff-port-330197 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 10:14:57 default-k8s-diff-port-330197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
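
The kubelet entries show dashboard-metrics-scraper-6ffb444bf9-zzt82 entering CrashLoopBackOff (back-off growing from 10s to 20s) before the kubelet itself is stopped at the end of the test. To see why the container keeps exiting with status 1 (as reported by conmon in the CRI-O section above), a typical next step is to pull the logs of the previous attempt, for example:

	kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-zzt82 --previous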
	
	
	==> kubernetes-dashboard [872344c350f1c0db76811cd62d9d7adaa803f3c0d3efcaf1a806e4f1fc4df822] <==
	2025/11/23 10:14:16 Using namespace: kubernetes-dashboard
	2025/11/23 10:14:16 Using in-cluster config to connect to apiserver
	2025/11/23 10:14:16 Using secret token for csrf signing
	2025/11/23 10:14:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 10:14:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 10:14:16 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 10:14:16 Generating JWE encryption key
	2025/11/23 10:14:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 10:14:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 10:14:17 Initializing JWE encryption key from synchronized object
	2025/11/23 10:14:17 Creating in-cluster Sidecar client
	2025/11/23 10:14:17 Serving insecurely on HTTP port: 9090
	2025/11/23 10:14:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:14:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 10:14:16 Starting overwatch
	
	
	==> storage-provisioner [87de444d24b2febb76dda7b50f414db46d89c0d8fc63cbf46209d99a0e01672d] <==
	I1123 10:14:36.569673       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:14:36.583797       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:14:36.583854       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:14:36.586302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:40.042664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:44.303432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:47.902390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:50.957023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:53.979595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:53.984577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:14:53.984765       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:14:53.984967       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-330197_591ba971-05d3-40bc-bfbc-0ace58a0e4e6!
	I1123 10:14:53.985722       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5c776180-63cd-4909-9a5b-31f492baafc6", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-330197_591ba971-05d3-40bc-bfbc-0ace58a0e4e6 became leader
	W1123 10:14:53.989100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:53.996203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:14:54.085567       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-330197_591ba971-05d3-40bc-bfbc-0ace58a0e4e6!
	W1123 10:14:55.999592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:56.006351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:58.010569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:14:58.019524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:15:00.023985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:15:00.048206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:15:02.067902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:15:02.073588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [89a95d5fe671d4bfa2f423fba99ec7d957ff464f9e3b91c5863c5d7913e94d04] <==
	I1123 10:14:06.283392       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 10:14:36.289675       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-330197 -n default-k8s-diff-port-330197
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-330197 -n default-k8s-diff-port-330197: exit status 2 (379.027165ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-330197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.17s)

                                                
                                    

Test pass (259/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.86
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.41
9 TestDownloadOnly/v1.28.0/DeleteAll 0.37
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.34.1/json-events 5.2
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 160.73
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 8.89
48 TestAddons/StoppedEnableDisable 12.45
49 TestCertOptions 36.78
50 TestCertExpiration 242.96
52 TestForceSystemdFlag 45.44
53 TestForceSystemdEnv 40.64
58 TestErrorSpam/setup 33.21
59 TestErrorSpam/start 0.83
60 TestErrorSpam/status 1.15
61 TestErrorSpam/pause 5.81
62 TestErrorSpam/unpause 5.96
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 78.65
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 40.58
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.61
75 TestFunctional/serial/CacheCmd/cache/add_local 1.1
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.84
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 37.85
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.46
86 TestFunctional/serial/LogsFileCmd 1.51
87 TestFunctional/serial/InvalidService 4.12
89 TestFunctional/parallel/ConfigCmd 0.48
90 TestFunctional/parallel/DashboardCmd 8.41
91 TestFunctional/parallel/DryRun 0.44
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.04
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 25.05
101 TestFunctional/parallel/SSHCmd 0.71
102 TestFunctional/parallel/CpCmd 2.1
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.78
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
113 TestFunctional/parallel/License 0.35
114 TestFunctional/parallel/Version/short 0.08
115 TestFunctional/parallel/Version/components 1.2
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
122 TestFunctional/parallel/ImageCommands/ImageBuild 4.11
123 TestFunctional/parallel/ImageCommands/Setup 0.77
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.48
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
135 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
139 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.1
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
145 TestFunctional/parallel/ProfileCmd/profile_list 0.45
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
147 TestFunctional/parallel/MountCmd/any-port 6.84
148 TestFunctional/parallel/MountCmd/specific-port 1.92
149 TestFunctional/parallel/MountCmd/VerifyCleanup 2.02
150 TestFunctional/parallel/ServiceCmd/List 1.41
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.42
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 177.97
163 TestMultiControlPlane/serial/DeployApp 6.65
164 TestMultiControlPlane/serial/PingHostFromPods 1.48
165 TestMultiControlPlane/serial/AddWorkerNode 59.86
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
168 TestMultiControlPlane/serial/CopyFile 20.16
169 TestMultiControlPlane/serial/StopSecondaryNode 13.03
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
171 TestMultiControlPlane/serial/RestartSecondaryNode 29.41
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.45
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.6
176 TestMultiControlPlane/serial/StopCluster 36.15
177 TestMultiControlPlane/serial/RestartCluster 85.97
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
179 TestMultiControlPlane/serial/AddSecondaryNode 80.5
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.45
185 TestJSONOutput/start/Command 81.97
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.86
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 41.7
211 TestKicCustomNetwork/use_default_bridge_network 34.86
212 TestKicExistingNetwork 37.08
213 TestKicCustomSubnet 37.54
214 TestKicStaticIP 40.03
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 73.43
219 TestMountStart/serial/StartWithMountFirst 8.9
220 TestMountStart/serial/VerifyMountFirst 0.29
221 TestMountStart/serial/StartWithMountSecond 9.03
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.31
226 TestMountStart/serial/RestartStopped 7.79
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 136.88
231 TestMultiNode/serial/DeployApp2Nodes 5.37
232 TestMultiNode/serial/PingHostFrom2Pods 0.98
233 TestMultiNode/serial/AddNode 57.61
234 TestMultiNode/serial/MultiNodeLabels 0.12
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.52
237 TestMultiNode/serial/StopNode 2.44
238 TestMultiNode/serial/StartAfterStop 8.46
239 TestMultiNode/serial/RestartKeepsNodes 79.85
240 TestMultiNode/serial/DeleteNode 5.68
241 TestMultiNode/serial/StopMultiNode 23.96
242 TestMultiNode/serial/RestartMultiNode 48.71
243 TestMultiNode/serial/ValidateNameConflict 37.13
248 TestPreload 129.63
250 TestScheduledStopUnix 108.15
253 TestInsufficientStorage 13.68
254 TestRunningBinaryUpgrade 63.91
256 TestKubernetesUpgrade 352.45
257 TestMissingContainerUpgrade 104.87
259 TestPause/serial/Start 90.88
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
262 TestNoKubernetes/serial/StartWithK8s 45.22
263 TestNoKubernetes/serial/StartWithStopK8s 6.96
264 TestNoKubernetes/serial/Start 8.01
265 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
267 TestNoKubernetes/serial/ProfileList 1.12
268 TestNoKubernetes/serial/Stop 1.32
269 TestNoKubernetes/serial/StartNoArgs 6.83
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
278 TestNetworkPlugins/group/false 3.72
282 TestPause/serial/SecondStartNoReconfiguration 27.51
284 TestStoppedBinaryUpgrade/Setup 0.81
285 TestStoppedBinaryUpgrade/Upgrade 66.83
293 TestNetworkPlugins/group/auto/Start 88.49
294 TestStoppedBinaryUpgrade/MinikubeLogs 1.23
295 TestNetworkPlugins/group/kindnet/Start 81.2
296 TestNetworkPlugins/group/auto/KubeletFlags 0.38
297 TestNetworkPlugins/group/auto/NetCatPod 11.43
298 TestNetworkPlugins/group/auto/DNS 0.21
299 TestNetworkPlugins/group/auto/Localhost 0.15
300 TestNetworkPlugins/group/auto/HairPin 0.14
301 TestNetworkPlugins/group/flannel/Start 58.86
302 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
304 TestNetworkPlugins/group/kindnet/NetCatPod 13.27
305 TestNetworkPlugins/group/kindnet/DNS 0.16
306 TestNetworkPlugins/group/kindnet/Localhost 0.18
307 TestNetworkPlugins/group/kindnet/HairPin 0.15
308 TestNetworkPlugins/group/enable-default-cni/Start 82.68
309 TestNetworkPlugins/group/flannel/ControllerPod 6
310 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
311 TestNetworkPlugins/group/flannel/NetCatPod 10.25
312 TestNetworkPlugins/group/flannel/DNS 0.23
313 TestNetworkPlugins/group/flannel/Localhost 0.15
314 TestNetworkPlugins/group/flannel/HairPin 0.17
315 TestNetworkPlugins/group/bridge/Start 80.98
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
317 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.31
318 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
319 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
320 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
321 TestNetworkPlugins/group/custom-flannel/Start 57.58
322 TestNetworkPlugins/group/bridge/KubeletFlags 0.39
323 TestNetworkPlugins/group/bridge/NetCatPod 11.36
324 TestNetworkPlugins/group/bridge/DNS 0.22
325 TestNetworkPlugins/group/bridge/Localhost 0.19
326 TestNetworkPlugins/group/bridge/HairPin 0.17
327 TestNetworkPlugins/group/calico/Start 77.87
328 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
329 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.27
330 TestNetworkPlugins/group/custom-flannel/DNS 0.2
331 TestNetworkPlugins/group/custom-flannel/Localhost 0.33
332 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
334 TestStartStop/group/old-k8s-version/serial/FirstStart 66.19
335 TestNetworkPlugins/group/calico/ControllerPod 6.01
336 TestNetworkPlugins/group/calico/KubeletFlags 0.4
337 TestNetworkPlugins/group/calico/NetCatPod 12.34
338 TestNetworkPlugins/group/calico/DNS 0.18
339 TestNetworkPlugins/group/calico/Localhost 0.16
340 TestNetworkPlugins/group/calico/HairPin 0.2
341 TestStartStop/group/old-k8s-version/serial/DeployApp 11.52
343 TestStartStop/group/no-preload/serial/FirstStart 73.88
345 TestStartStop/group/old-k8s-version/serial/Stop 12.91
346 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.58
347 TestStartStop/group/old-k8s-version/serial/SecondStart 59.43
348 TestStartStop/group/no-preload/serial/DeployApp 9.31
350 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
351 TestStartStop/group/no-preload/serial/Stop 12.12
352 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
353 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
356 TestStartStop/group/no-preload/serial/SecondStart 60.93
358 TestStartStop/group/embed-certs/serial/FirstStart 83.73
359 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
360 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
361 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
364 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.03
365 TestStartStop/group/embed-certs/serial/DeployApp 8.38
367 TestStartStop/group/embed-certs/serial/Stop 12.23
368 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
369 TestStartStop/group/embed-certs/serial/SecondStart 51.42
370 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.35
371 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
372 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
374 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.19
375 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
378 TestStartStop/group/newest-cni/serial/FirstStart 43.31
379 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.29
380 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 57.37
381 TestStartStop/group/newest-cni/serial/DeployApp 0
383 TestStartStop/group/newest-cni/serial/Stop 2.17
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
385 TestStartStop/group/newest-cni/serial/SecondStart 15.19
386 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
387 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
391 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
392 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
x
+
TestDownloadOnly/v1.28.0/json-events (5.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-447664 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-447664 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.858316042s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.86s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1123 08:57:25.045147  284904 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1123 08:57:25.045224  284904 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-447664
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-447664: exit status 85 (410.110497ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-447664 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-447664 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:57:19
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:57:19.232054  284910 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:57:19.232234  284910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:57:19.232267  284910 out.go:374] Setting ErrFile to fd 2...
	I1123 08:57:19.232289  284910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:57:19.232572  284910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	W1123 08:57:19.232745  284910 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21969-282998/.minikube/config/config.json: open /home/jenkins/minikube-integration/21969-282998/.minikube/config/config.json: no such file or directory
	I1123 08:57:19.233173  284910 out.go:368] Setting JSON to true
	I1123 08:57:19.234055  284910 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5988,"bootTime":1763882251,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 08:57:19.234155  284910 start.go:143] virtualization:  
	I1123 08:57:19.239881  284910 out.go:99] [download-only-447664] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1123 08:57:19.240080  284910 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball: no such file or directory
	I1123 08:57:19.240155  284910 notify.go:221] Checking for updates...
	I1123 08:57:19.243306  284910 out.go:171] MINIKUBE_LOCATION=21969
	I1123 08:57:19.246544  284910 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:57:19.249623  284910 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 08:57:19.252628  284910 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 08:57:19.255689  284910 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1123 08:57:19.261660  284910 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 08:57:19.261915  284910 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:57:19.300054  284910 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:57:19.300160  284910 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:57:19.358037  284910 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-23 08:57:19.348570286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:57:19.358142  284910 docker.go:319] overlay module found
	I1123 08:57:19.361271  284910 out.go:99] Using the docker driver based on user configuration
	I1123 08:57:19.361313  284910 start.go:309] selected driver: docker
	I1123 08:57:19.361321  284910 start.go:927] validating driver "docker" against <nil>
	I1123 08:57:19.361533  284910 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:57:19.417369  284910 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-23 08:57:19.40856669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:57:19.417587  284910 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:57:19.417880  284910 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1123 08:57:19.418033  284910 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 08:57:19.421297  284910 out.go:171] Using Docker driver with root privileges
	I1123 08:57:19.424299  284910 cni.go:84] Creating CNI manager for ""
	I1123 08:57:19.424378  284910 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:57:19.424410  284910 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:57:19.424501  284910 start.go:353] cluster config:
	{Name:download-only-447664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-447664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:57:19.427605  284910 out.go:99] Starting "download-only-447664" primary control-plane node in "download-only-447664" cluster
	I1123 08:57:19.427627  284910 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:57:19.430580  284910 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:57:19.430624  284910 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:57:19.430768  284910 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:57:19.446395  284910 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 08:57:19.447221  284910 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 08:57:19.447335  284910 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 08:57:19.486922  284910 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1123 08:57:19.486961  284910 cache.go:65] Caching tarball of preloaded images
	I1123 08:57:19.487152  284910 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:57:19.490465  284910 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1123 08:57:19.490493  284910 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1123 08:57:19.579604  284910 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1123 08:57:19.579742  284910 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1123 08:57:23.290478  284910 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1123 08:57:23.290943  284910 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/download-only-447664/config.json ...
	I1123 08:57:23.291003  284910 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/download-only-447664/config.json: {Name:mke76b4b63700bb70a904f327734ba8ead43dc0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:23.291884  284910 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:57:23.292115  284910 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-447664 host does not exist
	  To start a cluster, run: "minikube start -p download-only-447664"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.41s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.37s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-447664
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (5.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-986034 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-986034 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.197770402s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1123 08:57:31.241107  284904 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1123 08:57:31.241139  284904 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-986034
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-986034: exit status 85 (83.541759ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-447664 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-447664 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ delete  │ -p download-only-447664                                                                                                                                                   │ download-only-447664 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │ 23 Nov 25 08:57 UTC │
	│ start   │ -o=json --download-only -p download-only-986034 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-986034 │ jenkins │ v1.37.0 │ 23 Nov 25 08:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:57:26
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:57:26.089956  285108 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:57:26.090156  285108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:57:26.090218  285108 out.go:374] Setting ErrFile to fd 2...
	I1123 08:57:26.090242  285108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:57:26.090552  285108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 08:57:26.091045  285108 out.go:368] Setting JSON to true
	I1123 08:57:26.092163  285108 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5995,"bootTime":1763882251,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 08:57:26.092269  285108 start.go:143] virtualization:  
	I1123 08:57:26.120375  285108 out.go:99] [download-only-986034] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:57:26.120677  285108 notify.go:221] Checking for updates...
	I1123 08:57:26.167827  285108 out.go:171] MINIKUBE_LOCATION=21969
	I1123 08:57:26.200747  285108 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:57:26.233279  285108 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 08:57:26.264763  285108 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 08:57:26.297702  285108 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1123 08:57:26.360513  285108 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 08:57:26.360909  285108 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:57:26.382430  285108 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:57:26.382544  285108 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:57:26.442022  285108 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-23 08:57:26.432636386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:57:26.442145  285108 docker.go:319] overlay module found
	I1123 08:57:26.446150  285108 out.go:99] Using the docker driver based on user configuration
	I1123 08:57:26.446209  285108 start.go:309] selected driver: docker
	I1123 08:57:26.446220  285108 start.go:927] validating driver "docker" against <nil>
	I1123 08:57:26.446340  285108 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:57:26.503139  285108 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-23 08:57:26.494153282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:57:26.503290  285108 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:57:26.503582  285108 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1123 08:57:26.503736  285108 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 08:57:26.508350  285108 out.go:171] Using Docker driver with root privileges
	I1123 08:57:26.512374  285108 cni.go:84] Creating CNI manager for ""
	I1123 08:57:26.512441  285108 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:57:26.512454  285108 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:57:26.512533  285108 start.go:353] cluster config:
	{Name:download-only-986034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-986034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:57:26.516509  285108 out.go:99] Starting "download-only-986034" primary control-plane node in "download-only-986034" cluster
	I1123 08:57:26.516532  285108 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:57:26.520357  285108 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:57:26.520395  285108 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:57:26.520446  285108 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:57:26.535672  285108 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 08:57:26.535826  285108 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 08:57:26.535848  285108 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 08:57:26.535853  285108 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 08:57:26.535859  285108 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 08:57:26.574445  285108 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 08:57:26.574477  285108 cache.go:65] Caching tarball of preloaded images
	I1123 08:57:26.574652  285108 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:57:26.578657  285108 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1123 08:57:26.578688  285108 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1123 08:57:26.669289  285108 preload.go:295] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1123 08:57:26.669342  285108 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1123 08:57:30.468756  285108 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:57:30.469194  285108 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/download-only-986034/config.json ...
	I1123 08:57:30.469244  285108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/download-only-986034/config.json: {Name:mka0334f3f7ff2dda2bf45d2276696ef676f1c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:30.470194  285108 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:57:30.471011  285108 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21969-282998/.minikube/cache/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-986034 host does not exist
	  To start a cluster, run: "minikube start -p download-only-986034"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-986034
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1123 08:57:32.371602  284904 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-135438 --alsologtostderr --binary-mirror http://127.0.0.1:42341 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-135438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-135438
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-984173
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-984173: exit status 85 (71.960727ms)

                                                
                                                
-- stdout --
	* Profile "addons-984173" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-984173"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-984173
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-984173: exit status 85 (70.76695ms)

                                                
                                                
-- stdout --
	* Profile "addons-984173" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-984173"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (160.73s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-984173 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-984173 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m40.73249451s)
--- PASS: TestAddons/Setup (160.73s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-984173 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-984173 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.89s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-984173 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-984173 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a090ab12-7263-480e-a121-17363e4ce8a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a090ab12-7263-480e-a121-17363e4ce8a9] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003179837s
addons_test.go:694: (dbg) Run:  kubectl --context addons-984173 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-984173 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-984173 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-984173 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.89s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.45s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-984173
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-984173: (12.1528644s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-984173
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-984173
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-984173
--- PASS: TestAddons/StoppedEnableDisable (12.45s)

                                                
                                    
TestCertOptions (36.78s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-903768 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-903768 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.969737155s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-903768 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-903768 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-903768 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-903768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-903768
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-903768: (2.092026975s)
--- PASS: TestCertOptions (36.78s)

                                                
                                    
TestCertExpiration (242.96s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-350742 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-350742 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.145363768s)
E1123 09:54:57.977270  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:55:14.912473  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-350742 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-350742 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.374390933s)
helpers_test.go:175: Cleaning up "cert-expiration-350742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-350742
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-350742: (2.441267736s)
--- PASS: TestCertExpiration (242.96s)

                                                
                                    
TestForceSystemdFlag (45.44s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-692168 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-692168 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.032138579s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-692168 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-692168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-692168
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-692168: (3.013962576s)
--- PASS: TestForceSystemdFlag (45.44s)

                                                
                                    
TestForceSystemdEnv (40.64s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-653569 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-653569 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.863469929s)
helpers_test.go:175: Cleaning up "force-systemd-env-653569" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-653569
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-653569: (2.777307506s)
--- PASS: TestForceSystemdEnv (40.64s)

                                                
                                    
TestErrorSpam/setup (33.21s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-377073 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-377073 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-377073 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-377073 --driver=docker  --container-runtime=crio: (33.207580639s)
--- PASS: TestErrorSpam/setup (33.21s)

                                                
                                    
TestErrorSpam/start (0.83s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 start --dry-run
--- PASS: TestErrorSpam/start (0.83s)

                                                
                                    
TestErrorSpam/status (1.15s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 status
--- PASS: TestErrorSpam/status (1.15s)

                                                
                                    
TestErrorSpam/pause (5.81s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 pause: exit status 80 (1.735732544s)

                                                
                                                
-- stdout --
	* Pausing node nospam-377073 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:04:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 pause: exit status 80 (1.695789519s)

                                                
                                                
-- stdout --
	* Pausing node nospam-377073 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:04:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 pause: exit status 80 (2.3735669s)

                                                
                                                
-- stdout --
	* Pausing node nospam-377073 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:04:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.81s)

                                                
                                    
TestErrorSpam/unpause (5.96s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 unpause: exit status 80 (1.982858144s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-377073 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:04:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 unpause: exit status 80 (1.849819427s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-377073 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:04:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 unpause: exit status 80 (2.121752995s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-377073 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:04:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.96s)

                                                
                                    
TestErrorSpam/stop (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 stop: (1.320382534s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-377073 --log_dir /tmp/nospam-377073 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/test/nested/copy/284904/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (78.65s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-605613 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1123 09:05:14.909572  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:05:14.916106  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:05:14.927595  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:05:14.949092  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:05:14.990481  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:05:15.071953  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:05:15.233516  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:05:15.555255  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:05:16.197269  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:05:17.478834  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:05:20.040743  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:05:25.162522  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:05:35.404121  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-605613 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m18.651173775s)
--- PASS: TestFunctional/serial/StartWithProxy (78.65s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (40.58s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1123 09:05:53.989087  284904 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-605613 --alsologtostderr -v=8
E1123 09:05:55.885894  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-605613 --alsologtostderr -v=8: (40.580685943s)
functional_test.go:678: soft start took 40.581201977s for "functional-605613" cluster.
I1123 09:06:34.570066  284904 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (40.58s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-605613 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.61s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-605613 cache add registry.k8s.io/pause:3.1: (1.184258193s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 cache add registry.k8s.io/pause:3.3
E1123 09:06:36.847191  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-605613 cache add registry.k8s.io/pause:3.3: (1.29539757s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-605613 cache add registry.k8s.io/pause:latest: (1.133920886s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.61s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-605613 /tmp/TestFunctionalserialCacheCmdcacheadd_local4021330956/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 cache add minikube-local-cache-test:functional-605613
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 cache delete minikube-local-cache-test:functional-605613
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-605613
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-605613 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (281.53251ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 kubectl -- --context functional-605613 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-605613 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.85s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-605613 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-605613 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.852226721s)
functional_test.go:776: restart took 37.852340625s for "functional-605613" cluster.
I1123 09:07:19.944756  284904 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (37.85s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-605613 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-605613 logs: (1.461591195s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 logs --file /tmp/TestFunctionalserialLogsFileCmd46963794/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-605613 logs --file /tmp/TestFunctionalserialLogsFileCmd46963794/001/logs.txt: (1.507862293s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (4.12s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-605613 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-605613
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-605613: exit status 115 (365.596579ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30311 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-605613 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.12s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-605613 config get cpus: exit status 14 (87.981781ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-605613 config get cpus: exit status 14 (85.524546ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-605613 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-605613 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 312314: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.41s)

                                                
                                    
TestFunctional/parallel/DryRun (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-605613 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-605613 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (193.436431ms)

                                                
                                                
-- stdout --
	* [functional-605613] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:17:58.670434  312061 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:17:58.670690  312061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:17:58.670720  312061 out.go:374] Setting ErrFile to fd 2...
	I1123 09:17:58.670739  312061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:17:58.671043  312061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:17:58.671447  312061 out.go:368] Setting JSON to false
	I1123 09:17:58.672390  312061 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7228,"bootTime":1763882251,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 09:17:58.672493  312061 start.go:143] virtualization:  
	I1123 09:17:58.675647  312061 out.go:179] * [functional-605613] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 09:17:58.679284  312061 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:17:58.679421  312061 notify.go:221] Checking for updates...
	I1123 09:17:58.685141  312061 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:17:58.688125  312061 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:17:58.691046  312061 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 09:17:58.694352  312061 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 09:17:58.697177  312061 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:17:58.700466  312061 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:17:58.701123  312061 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:17:58.735595  312061 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 09:17:58.735771  312061 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:17:58.795110  312061 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 09:17:58.785347684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:17:58.795219  312061 docker.go:319] overlay module found
	I1123 09:17:58.798429  312061 out.go:179] * Using the docker driver based on existing profile
	I1123 09:17:58.801259  312061 start.go:309] selected driver: docker
	I1123 09:17:58.801277  312061 start.go:927] validating driver "docker" against &{Name:functional-605613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-605613 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:17:58.801382  312061 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:17:58.804867  312061 out.go:203] 
	W1123 09:17:58.807641  312061 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1123 09:17:58.810688  312061 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-605613 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)
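Note: DryRun exercises config validation only; the undersized request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY before anything in the existing profile is touched, and the follow-up dry run with the profile's normal settings succeeds. A minimal sketch of the failing case run by hand (the exit status 23 matches the one recorded for the same flags in the InternationalLanguage run below):

# Validation only: --dry-run never creates or modifies containers.
out/minikube-linux-arm64 start -p functional-605613 --dry-run --memory 250MB --driver=docker --container-runtime=crio
echo $?   # 23: requested 250MiB is below the 1800MB usable minimum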

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-605613 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-605613 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (203.845956ms)

                                                
                                                
-- stdout --
	* [functional-605613] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:17:58.472036  312014 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:17:58.472193  312014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:17:58.472221  312014 out.go:374] Setting ErrFile to fd 2...
	I1123 09:17:58.472228  312014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:17:58.472619  312014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:17:58.473032  312014 out.go:368] Setting JSON to false
	I1123 09:17:58.474003  312014 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7227,"bootTime":1763882251,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 09:17:58.474077  312014 start.go:143] virtualization:  
	I1123 09:17:58.477760  312014 out.go:179] * [functional-605613] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1123 09:17:58.480797  312014 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:17:58.480876  312014 notify.go:221] Checking for updates...
	I1123 09:17:58.487137  312014 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:17:58.490063  312014 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:17:58.493026  312014 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 09:17:58.495845  312014 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 09:17:58.498822  312014 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:17:58.502301  312014 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:17:58.502881  312014 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:17:58.537184  312014 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 09:17:58.537307  312014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:17:58.595917  312014 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 09:17:58.586337801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:17:58.596021  312014 docker.go:319] overlay module found
	I1123 09:17:58.599104  312014 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1123 09:17:58.601854  312014 start.go:309] selected driver: docker
	I1123 09:17:58.601884  312014 start.go:927] validating driver "docker" against &{Name:functional-605613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-605613 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:17:58.601995  312014 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:17:58.605357  312014 out.go:203] 
	W1123 09:17:58.608257  312014 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1123 09:17:58.615838  312014 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
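Note: the French messages above come from minikube's built-in localization; the test selects the catalogue through the locale environment. A rough manual reproduction, assuming the French translation is picked up from LC_ALL (an assumption about how the harness sets the locale, not something shown in this log):

# Force a French locale for one invocation (assumption: minikube honours LC_ALL for message selection).
LC_ALL=fr out/minikube-linux-arm64 start -p functional-605613 --dry-run --memory 250MB --driver=docker --container-runtime=crio
# The memory error is then printed as "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ...".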

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)
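Note: the three invocations cover the default, templated, and JSON status forms. For scripting, the JSON form is the easiest to consume; a small sketch (jq is an assumption of this example, the test itself does not use it):

# Custom Go template: only the named fields are printed.
out/minikube-linux-arm64 -p functional-605613 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
# JSON output, filtered with jq (field names taken from the template above):
out/minikube-linux-arm64 -p functional-605613 status -o json | jq -r '.Host, .Kubelet, .APIServer, .Kubeconfig'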

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)
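Note: both list forms are read-only. To check addon state programmatically, the JSON form can be filtered; a sketch under the assumption that the output is a JSON object keyed by addon name (jq is also an assumption of this example):

# Table for humans:
out/minikube-linux-arm64 -p functional-605613 addons list
# Addon names from the JSON form:
out/minikube-linux-arm64 -p functional-605613 addons list -o json | jq -r 'keys[]'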

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [46d9d073-fcf0-4a27-bc1e-bad777af7399] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003930114s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-605613 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-605613 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-605613 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-605613 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [825c1de2-7d41-4f9b-8feb-609ae42027c2] Pending
helpers_test.go:352: "sp-pod" [825c1de2-7d41-4f9b-8feb-609ae42027c2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [825c1de2-7d41-4f9b-8feb-609ae42027c2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003649599s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-605613 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-605613 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-605613 delete -f testdata/storage-provisioner/pod.yaml: (1.089632188s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-605613 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [225f0ee6-7202-4be3-bedc-82f32acb1101] Pending
helpers_test.go:352: "sp-pod" [225f0ee6-7202-4be3-bedc-82f32acb1101] Running
E1123 09:07:58.769221  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004442763s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-605613 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.05s)
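Note: the test applies testdata/storage-provisioner/pvc.yaml and pod.yaml, writes a file through the mounted claim, deletes and re-creates the pod, and verifies the file is still there. The testdata manifests are not reproduced in this report; a minimal equivalent pair, with illustrative image and size (not the test's actual files), looks roughly like this:

# Minimal PVC plus consuming pod, mirroring the names seen in the log above.
cat <<'EOF' | kubectl --context functional-605613 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi        # illustrative size
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: nginx            # illustrative image
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF
# Data written through the claim survives deleting and re-applying the pod:
kubectl --context functional-605613 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-605613 delete pod sp-pod
# (re-apply the pod manifest above, wait for Running, then)
kubectl --context functional-605613 exec sp-pod -- ls /tmp/mount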

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh -n functional-605613 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 cp functional-605613:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1673992194/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh -n functional-605613 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh -n functional-605613 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.10s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/284904/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "sudo cat /etc/test/nested/copy/284904/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)
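Note: FileSync verifies that a file staged on the host is visible inside the node at the same path. Minikube syncs anything under $MINIKUBE_HOME/files/ into the machine, preserving the relative path; the PID-derived directory below simply mirrors this run (a sketch of the mechanism, not the harness code):

# Stage a file under the minikube home used by this job:
mkdir -p /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/test/nested/copy/284904
echo "Test file for checking file sync process" > /home/jenkins/minikube-integration/21969-282998/.minikube/files/etc/test/nested/copy/284904/hosts
# After the profile is (re)started, the file appears at the mirrored path inside the node:
out/minikube-linux-arm64 -p functional-605613 ssh "sudo cat /etc/test/nested/copy/284904/hosts"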

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/284904.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "sudo cat /etc/ssl/certs/284904.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/284904.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "sudo cat /usr/share/ca-certificates/284904.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2849042.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "sudo cat /etc/ssl/certs/2849042.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2849042.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "sudo cat /usr/share/ca-certificates/2849042.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.78s)
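Note: CertSync is the certificate counterpart of FileSync: PEM files placed under $MINIKUBE_HOME/certs are copied into the node and show up both under /usr/share/ca-certificates and, via their hash names (51391683.0 and 3ec20f2e.0 above), under /etc/ssl/certs. A manual spot-check along the same lines (assuming the documented certs sync behaviour):

# Stage a certificate on the host side, then restart the profile:
cp 284904.pem /home/jenkins/minikube-integration/21969-282998/.minikube/certs/
# Inside the node it is reachable by name and by its hash link:
out/minikube-linux-arm64 -p functional-605613 ssh "sudo cat /usr/share/ca-certificates/284904.pem"
out/minikube-linux-arm64 -p functional-605613 ssh "sudo cat /etc/ssl/certs/51391683.0"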

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-605613 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
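Note: the go-template above prints every label key on the first node; the same information is available with stock kubectl output options:

# All labels of the first node as a map, or as the LABELS column:
kubectl --context functional-605613 get nodes -o jsonpath='{.items[0].metadata.labels}'
kubectl --context functional-605613 get nodes --show-labels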

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-605613 ssh "sudo systemctl is-active docker": exit status 1 (335.865835ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-605613 ssh "sudo systemctl is-active containerd": exit status 1 (336.710512ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
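Note: with crio selected, docker and containerd must not be running inside the node; systemctl is-active prints "inactive" and exits non-zero, which is exactly the result the test expects (the exit status 1 above is minikube ssh relaying that failure). The positive check for the active runtime is the natural complement (sketch):

# Only the selected runtime should be active:
out/minikube-linux-arm64 -p functional-605613 ssh "sudo systemctl is-active crio"        # expected: active
out/minikube-linux-arm64 -p functional-605613 ssh "sudo systemctl is-active docker"      # expected: inactive, non-zero exit
out/minikube-linux-arm64 -p functional-605613 ssh "sudo systemctl is-active containerd"  # expected: inactive, non-zero exit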

                                                
                                    
x
+
TestFunctional/parallel/License (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-605613 version -o=json --components: (1.195682073s)
--- PASS: TestFunctional/parallel/Version/components (1.20s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-605613 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-605613 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-605613 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-605613 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 307593: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)
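Note: RunSecondTunnel starts two tunnel processes for the same profile and then tears both down; the "process already finished" warning above is the cleanup helper finding one of them already gone. Outside the harness a single tunnel is usually backgrounded explicitly (sketch):

# Run a tunnel in the background so LoadBalancer services get a reachable address:
out/minikube-linux-arm64 -p functional-605613 tunnel --alsologtostderr &
TUNNEL_PID=$!
# ... exercise the services that need the tunnel ...
kill "$TUNNEL_PID"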

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-605613 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-605613 image ls --format short --alsologtostderr:
I1123 09:18:08.840290  313402 out.go:360] Setting OutFile to fd 1 ...
I1123 09:18:08.840395  313402 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:18:08.840403  313402 out.go:374] Setting ErrFile to fd 2...
I1123 09:18:08.840407  313402 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:18:08.840794  313402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
I1123 09:18:08.841817  313402 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:18:08.841954  313402 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:18:08.842496  313402 cli_runner.go:164] Run: docker container inspect functional-605613 --format={{.State.Status}}
I1123 09:18:08.887854  313402 ssh_runner.go:195] Run: systemctl --version
I1123 09:18:08.887909  313402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
I1123 09:18:08.918749  313402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/functional-605613/id_rsa Username:docker}
I1123 09:18:09.036530  313402 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-605613 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-605613 image ls --format table --alsologtostderr:
I1123 09:18:09.904925  313692 out.go:360] Setting OutFile to fd 1 ...
I1123 09:18:09.905050  313692 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:18:09.905059  313692 out.go:374] Setting ErrFile to fd 2...
I1123 09:18:09.905065  313692 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:18:09.905458  313692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
I1123 09:18:09.906384  313692 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:18:09.906505  313692 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:18:09.907045  313692 cli_runner.go:164] Run: docker container inspect functional-605613 --format={{.State.Status}}
I1123 09:18:09.931023  313692 ssh_runner.go:195] Run: systemctl --version
I1123 09:18:09.931080  313692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
I1123 09:18:09.960385  313692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/functional-605613/id_rsa Username:docker}
I1123 09:18:10.076531  313692 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-605613 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags"
:["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2
e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","regist
ry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler
@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5
adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-605613 image ls --format json --alsologtostderr:
I1123 09:18:09.649150  313622 out.go:360] Setting OutFile to fd 1 ...
I1123 09:18:09.649671  313622 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:18:09.653583  313622 out.go:374] Setting ErrFile to fd 2...
I1123 09:18:09.653644  313622 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:18:09.654058  313622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
I1123 09:18:09.655075  313622 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:18:09.655276  313622 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:18:09.656089  313622 cli_runner.go:164] Run: docker container inspect functional-605613 --format={{.State.Status}}
I1123 09:18:09.672783  313622 ssh_runner.go:195] Run: systemctl --version
I1123 09:18:09.672845  313622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
I1123 09:18:09.689485  313622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/functional-605613/id_rsa Username:docker}
I1123 09:18:09.796050  313622 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
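Note: the JSON listing is the easiest of the four formats to post-process; the table shown in ImageListTable can be approximated from it. A sketch (jq is an assumption of this example, not something the test uses):

# Print "repoTag <tab> size" for every tagged image from the JSON listing:
out/minikube-linux-arm64 -p functional-605613 image ls --format json | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])\t\(.size)"'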

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-605613 image ls --format yaml --alsologtostderr:
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-605613 image ls --format yaml --alsologtostderr:
I1123 09:18:09.134079  313474 out.go:360] Setting OutFile to fd 1 ...
I1123 09:18:09.134285  313474 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:18:09.134313  313474 out.go:374] Setting ErrFile to fd 2...
I1123 09:18:09.134331  313474 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:18:09.134684  313474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
I1123 09:18:09.135414  313474 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:18:09.135590  313474 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:18:09.136200  313474 cli_runner.go:164] Run: docker container inspect functional-605613 --format={{.State.Status}}
I1123 09:18:09.159954  313474 ssh_runner.go:195] Run: systemctl --version
I1123 09:18:09.160015  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
I1123 09:18:09.181734  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/functional-605613/id_rsa Username:docker}
I1123 09:18:09.303994  313474 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-605613 ssh pgrep buildkitd: exit status 1 (365.223535ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image build -t localhost/my-image:functional-605613 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-605613 image build -t localhost/my-image:functional-605613 testdata/build --alsologtostderr: (3.516651256s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-605613 image build -t localhost/my-image:functional-605613 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ea37b4b0475
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-605613
--> b54fd11cada
Successfully tagged localhost/my-image:functional-605613
b54fd11cadac45cd4c6a1031a556797f60b744c8685e3b97698581f40b68a702
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-605613 image build -t localhost/my-image:functional-605613 testdata/build --alsologtostderr:
I1123 09:18:09.769652  313655 out.go:360] Setting OutFile to fd 1 ...
I1123 09:18:09.770453  313655 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:18:09.770470  313655 out.go:374] Setting ErrFile to fd 2...
I1123 09:18:09.770477  313655 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 09:18:09.770744  313655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
I1123 09:18:09.771361  313655 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:18:09.771957  313655 config.go:182] Loaded profile config "functional-605613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 09:18:09.772492  313655 cli_runner.go:164] Run: docker container inspect functional-605613 --format={{.State.Status}}
I1123 09:18:09.789482  313655 ssh_runner.go:195] Run: systemctl --version
I1123 09:18:09.789918  313655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-605613
I1123 09:18:09.808966  313655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/functional-605613/id_rsa Username:docker}
I1123 09:18:09.929748  313655 build_images.go:162] Building image from path: /tmp/build.4092902414.tar
I1123 09:18:09.929828  313655 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1123 09:18:09.950563  313655 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4092902414.tar
I1123 09:18:09.956873  313655 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4092902414.tar: stat -c "%s %y" /var/lib/minikube/build/build.4092902414.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4092902414.tar': No such file or directory
I1123 09:18:09.956901  313655 ssh_runner.go:362] scp /tmp/build.4092902414.tar --> /var/lib/minikube/build/build.4092902414.tar (3072 bytes)
I1123 09:18:09.992210  313655 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4092902414
I1123 09:18:10.011183  313655 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4092902414 -xf /var/lib/minikube/build/build.4092902414.tar
I1123 09:18:10.024463  313655 crio.go:315] Building image: /var/lib/minikube/build/build.4092902414
I1123 09:18:10.024676  313655 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-605613 /var/lib/minikube/build/build.4092902414 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1123 09:18:13.205386  313655 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-605613 /var/lib/minikube/build/build.4092902414 --cgroup-manager=cgroupfs: (3.180683973s)
I1123 09:18:13.205519  313655 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4092902414
I1123 09:18:13.213002  313655 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4092902414.tar
I1123 09:18:13.220380  313655 build_images.go:218] Built localhost/my-image:functional-605613 from /tmp/build.4092902414.tar
I1123 09:18:13.220412  313655 build_images.go:134] succeeded building to: functional-605613
I1123 09:18:13.220418  313655 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image ls
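For reference, the three STEP lines in the Stdout above imply that testdata/build carries a Dockerfile along these lines (a reconstruction from the build output, not the checked-in file; content.txt is whatever payload the fixture ships):

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /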
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.11s)

TestFunctional/parallel/ImageCommands/Setup (0.77s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-605613
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.77s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-605613 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.48s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-605613 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [b13af4c2-0fb3-4d34-9347-ae5464d3a9af] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [b13af4c2-0fb3-4d34-9347-ae5464d3a9af] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003190612s
I1123 09:07:39.286227  284904 kapi.go:150] Service nginx-svc in namespace default found.
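From the run=nginx-svc selector waited on above and the loadBalancer field queried in the IngressIP step below, testdata/testsvc.yaml plausibly declares a pod labeled run=nginx-svc plus a LoadBalancer Service named nginx-svc, roughly like this sketch (the port and other details are assumptions, not the checked-in manifest):

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer
  selector:
    run: nginx-svc
  ports:
    - port: 80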
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.48s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image rm kicbase/echo-server:functional-605613 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-605613 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.162.187 is working!
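To reproduce this check by hand, the flow the tunnel tests drive is roughly the following (the 10.97.162.187 address comes from the run above and will differ between runs):

out/minikube-linux-arm64 -p functional-605613 tunnel    # keep running in a separate terminal
kubectl --context functional-605613 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl http://10.97.162.187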
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-605613 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 update-context --alsologtostderr -v=2
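update-context rewrites the kubeconfig entry for the profile so it keeps pointing at the current API server endpoint; a quick way to confirm the result (a sketch, assuming functional-605613 is the active context) is:

kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'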
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "382.005162ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "67.16805ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "349.881006ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "56.827867ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (6.84s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-605613 /tmp/TestFunctionalparallelMountCmdany-port3173332009/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763889466301353757" to /tmp/TestFunctionalparallelMountCmdany-port3173332009/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763889466301353757" to /tmp/TestFunctionalparallelMountCmdany-port3173332009/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763889466301353757" to /tmp/TestFunctionalparallelMountCmdany-port3173332009/001/test-1763889466301353757
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-605613 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (330.872601ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1123 09:17:46.632509  284904 retry.go:31] will retry after 434.721345ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 23 09:17 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 23 09:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 23 09:17 test-1763889466301353757
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh cat /mount-9p/test-1763889466301353757
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-605613 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [bb097267-7c1d-4196-8b55-51945d59a4e8] Pending
helpers_test.go:352: "busybox-mount" [bb097267-7c1d-4196-8b55-51945d59a4e8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [bb097267-7c1d-4196-8b55-51945d59a4e8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [bb097267-7c1d-4196-8b55-51945d59a4e8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003859383s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-605613 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-605613 /tmp/TestFunctionalparallelMountCmdany-port3173332009/001:/mount-9p --alsologtostderr -v=1] ...
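The same 9p mount can be exercised manually with the commands the test drives above (/tmp/somedir stands in for any host directory):

out/minikube-linux-arm64 mount -p functional-605613 /tmp/somedir:/mount-9p &
out/minikube-linux-arm64 -p functional-605613 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-arm64 -p functional-605613 ssh -- ls -la /mount-9p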
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.84s)

TestFunctional/parallel/MountCmd/specific-port (1.92s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-605613 /tmp/TestFunctionalparallelMountCmdspecific-port1530850450/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-605613 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (326.310295ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1123 09:17:53.472735  284904 retry.go:31] will retry after 544.241538ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-605613 /tmp/TestFunctionalparallelMountCmdspecific-port1530850450/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-605613 ssh "sudo umount -f /mount-9p": exit status 1 (277.169133ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-605613 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-605613 /tmp/TestFunctionalparallelMountCmdspecific-port1530850450/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.92s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.02s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-605613 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3341819814/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-605613 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3341819814/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-605613 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3341819814/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-605613 ssh "findmnt -T" /mount1: exit status 1 (622.259192ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1123 09:17:55.699319  284904 retry.go:31] will retry after 494.620605ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-605613 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-605613 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3341819814/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-605613 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3341819814/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-605613 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3341819814/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.02s)

TestFunctional/parallel/ServiceCmd/List (1.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-605613 service list: (1.411775566s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.41s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-605613 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-605613 service list -o json: (1.415814747s)
functional_test.go:1504: Took "1.415901148s" to run "out/minikube-linux-arm64 -p functional-605613 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.42s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-605613
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-605613
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-605613
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (177.97s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1123 09:20:14.909869  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-857095 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m57.103473353s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 status --alsologtostderr -v 5
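After a --ha start like the one above, the resulting control-plane/worker topology can be inspected with, for example:

out/minikube-linux-arm64 -p ha-857095 node list
kubectl --context ha-857095 get nodes -o wide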
--- PASS: TestMultiControlPlane/serial/StartCluster (177.97s)

TestMultiControlPlane/serial/DeployApp (6.65s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-857095 kubectl -- rollout status deployment/busybox: (3.968906957s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- exec busybox-7b57f96db7-jr7sx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- exec busybox-7b57f96db7-ltgrn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- exec busybox-7b57f96db7-xdt5w -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- exec busybox-7b57f96db7-jr7sx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- exec busybox-7b57f96db7-ltgrn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- exec busybox-7b57f96db7-xdt5w -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- exec busybox-7b57f96db7-jr7sx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- exec busybox-7b57f96db7-ltgrn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- exec busybox-7b57f96db7-xdt5w -- nslookup kubernetes.default.svc.cluster.local
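The per-pod nslookup loop above can be condensed into a single spot check against any replica of the deployment, e.g.:

kubectl --context ha-857095 exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local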
--- PASS: TestMultiControlPlane/serial/DeployApp (6.65s)

TestMultiControlPlane/serial/PingHostFromPods (1.48s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- exec busybox-7b57f96db7-jr7sx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- exec busybox-7b57f96db7-jr7sx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- exec busybox-7b57f96db7-ltgrn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- exec busybox-7b57f96db7-ltgrn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- exec busybox-7b57f96db7-xdt5w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 kubectl -- exec busybox-7b57f96db7-xdt5w -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.48s)

TestMultiControlPlane/serial/AddWorkerNode (59.86s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 node add --alsologtostderr -v 5
E1123 09:21:37.972696  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-857095 node add --alsologtostderr -v 5: (58.772166821s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-857095 status --alsologtostderr -v 5: (1.088022255s)
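node add joins the cluster as a worker by default; adding another control-plane member would instead use the --control-plane flag, roughly as follows (a sketch, assuming the flag is available in this minikube build):

out/minikube-linux-arm64 -p ha-857095 node add --control-plane --alsologtostderr -v 5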
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.86s)

TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-857095 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.114486031s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

TestMultiControlPlane/serial/CopyFile (20.16s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-857095 status --output json --alsologtostderr -v 5: (1.072204065s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp testdata/cp-test.txt ha-857095:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp ha-857095:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1815903833/001/cp-test_ha-857095.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp ha-857095:/home/docker/cp-test.txt ha-857095-m02:/home/docker/cp-test_ha-857095_ha-857095-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m02 "sudo cat /home/docker/cp-test_ha-857095_ha-857095-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp ha-857095:/home/docker/cp-test.txt ha-857095-m03:/home/docker/cp-test_ha-857095_ha-857095-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m03 "sudo cat /home/docker/cp-test_ha-857095_ha-857095-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp ha-857095:/home/docker/cp-test.txt ha-857095-m04:/home/docker/cp-test_ha-857095_ha-857095-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m04 "sudo cat /home/docker/cp-test_ha-857095_ha-857095-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp testdata/cp-test.txt ha-857095-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m02 "sudo cat /home/docker/cp-test.txt"
E1123 09:22:29.809863  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:22:29.816392  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:22:29.829705  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:22:29.851997  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:22:29.893681  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:22:29.975072  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp ha-857095-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1815903833/001/cp-test_ha-857095-m02.txt
E1123 09:22:30.136582  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m02 "sudo cat /home/docker/cp-test.txt"
E1123 09:22:30.460635  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp ha-857095-m02:/home/docker/cp-test.txt ha-857095:/home/docker/cp-test_ha-857095-m02_ha-857095.txt
E1123 09:22:31.102814  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095 "sudo cat /home/docker/cp-test_ha-857095-m02_ha-857095.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp ha-857095-m02:/home/docker/cp-test.txt ha-857095-m03:/home/docker/cp-test_ha-857095-m02_ha-857095-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m02 "sudo cat /home/docker/cp-test.txt"
E1123 09:22:32.385056  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m03 "sudo cat /home/docker/cp-test_ha-857095-m02_ha-857095-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp ha-857095-m02:/home/docker/cp-test.txt ha-857095-m04:/home/docker/cp-test_ha-857095-m02_ha-857095-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m04 "sudo cat /home/docker/cp-test_ha-857095-m02_ha-857095-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp testdata/cp-test.txt ha-857095-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp ha-857095-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1815903833/001/cp-test_ha-857095-m03.txt
E1123 09:22:34.946756  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp ha-857095-m03:/home/docker/cp-test.txt ha-857095:/home/docker/cp-test_ha-857095-m03_ha-857095.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095 "sudo cat /home/docker/cp-test_ha-857095-m03_ha-857095.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp ha-857095-m03:/home/docker/cp-test.txt ha-857095-m02:/home/docker/cp-test_ha-857095-m03_ha-857095-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m02 "sudo cat /home/docker/cp-test_ha-857095-m03_ha-857095-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp ha-857095-m03:/home/docker/cp-test.txt ha-857095-m04:/home/docker/cp-test_ha-857095-m03_ha-857095-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m04 "sudo cat /home/docker/cp-test_ha-857095-m03_ha-857095-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp testdata/cp-test.txt ha-857095-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp ha-857095-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1815903833/001/cp-test_ha-857095-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m04 "sudo cat /home/docker/cp-test.txt"
E1123 09:22:40.068399  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp ha-857095-m04:/home/docker/cp-test.txt ha-857095:/home/docker/cp-test_ha-857095-m04_ha-857095.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095 "sudo cat /home/docker/cp-test_ha-857095-m04_ha-857095.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp ha-857095-m04:/home/docker/cp-test.txt ha-857095-m02:/home/docker/cp-test_ha-857095-m04_ha-857095-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m02 "sudo cat /home/docker/cp-test_ha-857095-m04_ha-857095-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 cp ha-857095-m04:/home/docker/cp-test.txt ha-857095-m03:/home/docker/cp-test_ha-857095-m04_ha-857095-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m03 "sudo cat /home/docker/cp-test_ha-857095-m04_ha-857095-m03.txt"
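Each pair of steps above follows the same pattern, copy with minikube cp and then verify over ssh on the target node, i.e.:

out/minikube-linux-arm64 -p ha-857095 cp testdata/cp-test.txt ha-857095-m04:/home/docker/cp-test.txt
out/minikube-linux-arm64 -p ha-857095 ssh -n ha-857095-m04 "sudo cat /home/docker/cp-test.txt"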
--- PASS: TestMultiControlPlane/serial/CopyFile (20.16s)

TestMultiControlPlane/serial/StopSecondaryNode (13.03s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 node stop m02 --alsologtostderr -v 5
E1123 09:22:50.311846  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-857095 node stop m02 --alsologtostderr -v 5: (12.215394692s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-857095 status --alsologtostderr -v 5: exit status 7 (816.590745ms)

-- stdout --
	ha-857095
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-857095-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-857095-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-857095-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1123 09:22:55.933701  328591 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:22:55.933889  328591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:22:55.933915  328591 out.go:374] Setting ErrFile to fd 2...
	I1123 09:22:55.933934  328591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:22:55.934210  328591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:22:55.934423  328591 out.go:368] Setting JSON to false
	I1123 09:22:55.934477  328591 mustload.go:66] Loading cluster: ha-857095
	I1123 09:22:55.934690  328591 notify.go:221] Checking for updates...
	I1123 09:22:55.935024  328591 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:22:55.935064  328591 status.go:174] checking status of ha-857095 ...
	I1123 09:22:55.935653  328591 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:22:55.959773  328591 status.go:371] ha-857095 host status = "Running" (err=<nil>)
	I1123 09:22:55.959796  328591 host.go:66] Checking if "ha-857095" exists ...
	I1123 09:22:55.960110  328591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095
	I1123 09:22:55.989981  328591 host.go:66] Checking if "ha-857095" exists ...
	I1123 09:22:55.990270  328591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:22:55.990311  328591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095
	I1123 09:22:56.013895  328591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095/id_rsa Username:docker}
	I1123 09:22:56.123019  328591 ssh_runner.go:195] Run: systemctl --version
	I1123 09:22:56.130125  328591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:22:56.143573  328591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:22:56.216716  328591 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-23 09:22:56.207152117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:22:56.217311  328591 kubeconfig.go:125] found "ha-857095" server: "https://192.168.49.254:8443"
	I1123 09:22:56.217347  328591 api_server.go:166] Checking apiserver status ...
	I1123 09:22:56.217391  328591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:22:56.228940  328591 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup
	I1123 09:22:56.242514  328591 api_server.go:182] apiserver freezer: "10:freezer:/docker/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8/crio/crio-8a21f9862e5e314e8c1d73b1337b650a3ec2db579476415f543cf51517f0e49c"
	I1123 09:22:56.242593  328591 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8497a55e0a4e2653184706e3a18829d1eeae0bb07739dd6177081f03188fc8c8/crio/crio-8a21f9862e5e314e8c1d73b1337b650a3ec2db579476415f543cf51517f0e49c/freezer.state
	I1123 09:22:56.251080  328591 api_server.go:204] freezer state: "THAWED"
	I1123 09:22:56.251114  328591 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 09:22:56.259293  328591 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 09:22:56.259321  328591 status.go:463] ha-857095 apiserver status = Running (err=<nil>)
	I1123 09:22:56.259331  328591 status.go:176] ha-857095 status: &{Name:ha-857095 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:22:56.259347  328591 status.go:174] checking status of ha-857095-m02 ...
	I1123 09:22:56.259662  328591 cli_runner.go:164] Run: docker container inspect ha-857095-m02 --format={{.State.Status}}
	I1123 09:22:56.277094  328591 status.go:371] ha-857095-m02 host status = "Stopped" (err=<nil>)
	I1123 09:22:56.277117  328591 status.go:384] host is not running, skipping remaining checks
	I1123 09:22:56.277125  328591 status.go:176] ha-857095-m02 status: &{Name:ha-857095-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:22:56.277144  328591 status.go:174] checking status of ha-857095-m03 ...
	I1123 09:22:56.277514  328591 cli_runner.go:164] Run: docker container inspect ha-857095-m03 --format={{.State.Status}}
	I1123 09:22:56.293754  328591 status.go:371] ha-857095-m03 host status = "Running" (err=<nil>)
	I1123 09:22:56.293781  328591 host.go:66] Checking if "ha-857095-m03" exists ...
	I1123 09:22:56.294078  328591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m03
	I1123 09:22:56.311623  328591 host.go:66] Checking if "ha-857095-m03" exists ...
	I1123 09:22:56.311935  328591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:22:56.311989  328591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m03
	I1123 09:22:56.329770  328591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m03/id_rsa Username:docker}
	I1123 09:22:56.433945  328591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:22:56.452748  328591 kubeconfig.go:125] found "ha-857095" server: "https://192.168.49.254:8443"
	I1123 09:22:56.452783  328591 api_server.go:166] Checking apiserver status ...
	I1123 09:22:56.452847  328591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:22:56.465081  328591 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup
	I1123 09:22:56.473389  328591 api_server.go:182] apiserver freezer: "10:freezer:/docker/9ddbabadb8c1398482397da28992a8b2b392dc9bbc9b382091698e2e999bedd4/crio/crio-8731b825c97437f652282fd30bef13e6a3c1ca6fc6122b8849a5903903a414ec"
	I1123 09:22:56.473563  328591 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9ddbabadb8c1398482397da28992a8b2b392dc9bbc9b382091698e2e999bedd4/crio/crio-8731b825c97437f652282fd30bef13e6a3c1ca6fc6122b8849a5903903a414ec/freezer.state
	I1123 09:22:56.482261  328591 api_server.go:204] freezer state: "THAWED"
	I1123 09:22:56.482288  328591 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 09:22:56.490770  328591 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 09:22:56.490803  328591 status.go:463] ha-857095-m03 apiserver status = Running (err=<nil>)
	I1123 09:22:56.490824  328591 status.go:176] ha-857095-m03 status: &{Name:ha-857095-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:22:56.490845  328591 status.go:174] checking status of ha-857095-m04 ...
	I1123 09:22:56.491157  328591 cli_runner.go:164] Run: docker container inspect ha-857095-m04 --format={{.State.Status}}
	I1123 09:22:56.508067  328591 status.go:371] ha-857095-m04 host status = "Running" (err=<nil>)
	I1123 09:22:56.508092  328591 host.go:66] Checking if "ha-857095-m04" exists ...
	I1123 09:22:56.508403  328591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-857095-m04
	I1123 09:22:56.525124  328591 host.go:66] Checking if "ha-857095-m04" exists ...
	I1123 09:22:56.525542  328591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:22:56.525603  328591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-857095-m04
	I1123 09:22:56.544177  328591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33172 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/ha-857095-m04/id_rsa Username:docker}
	I1123 09:22:56.654796  328591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:22:56.667471  328591 status.go:176] ha-857095-m04 status: &{Name:ha-857095-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
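Note that minikube status reports a degraded HA cluster through its exit code (7 here, with m02 stopped), so a script can simply branch on the return value, e.g.:

out/minikube-linux-arm64 -p ha-857095 status || echo "cluster degraded (exit $?)"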
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.03s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

TestMultiControlPlane/serial/RestartSecondaryNode (29.41s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 node start m02 --alsologtostderr -v 5
E1123 09:23:10.793291  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-857095 node start m02 --alsologtostderr -v 5: (27.782832958s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-857095 status --alsologtostderr -v 5: (1.493828867s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (29.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.445209011s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-857095 node delete m03 --alsologtostderr -v 5: (10.612982781s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.60s)
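The readiness assertion at ha_test.go:521 shells out to kubectl with a go-template rather than parsing JSON: only conditions whose type is Ready are printed, one status per node. A minimal sketch of what that template evaluates to, using Go's text/template over hand-built sample data (the two nodes below are hypothetical, not taken from this run):

package main

import (
	"os"
	"text/template"
)

// Stand-in for the structure `kubectl get nodes -o go-template` walks:
// .items[].status.conditions[].{type,status}.
func main() {
	nodes := map[string]interface{}{
		"items": []map[string]interface{}{
			{"status": map[string]interface{}{"conditions": []map[string]string{
				{"type": "MemoryPressure", "status": "False"},
				{"type": "Ready", "status": "True"},
			}}},
			{"status": map[string]interface{}{"conditions": []map[string]string{
				{"type": "Ready", "status": "True"},
			}}},
		},
	}

	// Same template the test passes to kubectl: print .status only for
	// conditions whose .type is "Ready", one node per line.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	t := template.Must(template.New("ready").Parse(tmpl))
	_ = t.Execute(os.Stdout, nodes) // prints " True" once per Ready node
}

Presumably the test then only needs to verify that every emitted line reads " True", which is cheaper than decoding the full node objects.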

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-857095 stop --alsologtostderr -v 5: (36.03519852s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-857095 status --alsologtostderr -v 5: exit status 7 (116.356025ms)

                                                
                                                
-- stdout --
	ha-857095
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-857095-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-857095-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:31:12.619107  341564 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:31:12.619242  341564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:31:12.619253  341564 out.go:374] Setting ErrFile to fd 2...
	I1123 09:31:12.619258  341564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:31:12.619512  341564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:31:12.619702  341564 out.go:368] Setting JSON to false
	I1123 09:31:12.619735  341564 mustload.go:66] Loading cluster: ha-857095
	I1123 09:31:12.619785  341564 notify.go:221] Checking for updates...
	I1123 09:31:12.620178  341564 config.go:182] Loaded profile config "ha-857095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:31:12.620197  341564 status.go:174] checking status of ha-857095 ...
	I1123 09:31:12.620740  341564 cli_runner.go:164] Run: docker container inspect ha-857095 --format={{.State.Status}}
	I1123 09:31:12.642710  341564 status.go:371] ha-857095 host status = "Stopped" (err=<nil>)
	I1123 09:31:12.642731  341564 status.go:384] host is not running, skipping remaining checks
	I1123 09:31:12.642738  341564 status.go:176] ha-857095 status: &{Name:ha-857095 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:31:12.642765  341564 status.go:174] checking status of ha-857095-m02 ...
	I1123 09:31:12.643086  341564 cli_runner.go:164] Run: docker container inspect ha-857095-m02 --format={{.State.Status}}
	I1123 09:31:12.669580  341564 status.go:371] ha-857095-m02 host status = "Stopped" (err=<nil>)
	I1123 09:31:12.669601  341564 status.go:384] host is not running, skipping remaining checks
	I1123 09:31:12.669608  341564 status.go:176] ha-857095-m02 status: &{Name:ha-857095-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:31:12.669628  341564 status.go:174] checking status of ha-857095-m04 ...
	I1123 09:31:12.669917  341564 cli_runner.go:164] Run: docker container inspect ha-857095-m04 --format={{.State.Status}}
	I1123 09:31:12.686734  341564 status.go:371] ha-857095-m04 host status = "Stopped" (err=<nil>)
	I1123 09:31:12.686755  341564 status.go:384] host is not running, skipping remaining checks
	I1123 09:31:12.686762  341564 status.go:176] ha-857095-m04 status: &{Name:ha-857095-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.15s)
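Note how the stop is verified: `minikube status` still prints the per-node table but exits with status 7, and the test accepts that non-zero exit as the "everything stopped" signal. A hedged sketch of how a caller shelling out to the same command can separate that case from a genuine failure (binary path and profile name copied from the run above; treating 7 as "stopped" is an inference from this log, not a documented contract):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-857095", "status")
	out, err := cmd.Output() // stdout is still captured on a non-zero exit

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cluster running:\n%s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit code seen in the log above after a full stop: the status
		// text is still on stdout, only the exit code signals "stopped".
		fmt.Printf("cluster stopped:\n%s", out)
	default:
		fmt.Println("status command itself failed:", err)
	}
}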

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (85.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1123 09:32:29.809575  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-857095 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m25.012149141s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (85.97s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (80.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-857095 node add --control-plane --alsologtostderr -v 5: (1m19.449823848s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-857095 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-857095 status --alsologtostderr -v 5: (1.046769215s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.446380822s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.45s)

                                                
                                    
x
+
TestJSONOutput/start/Command (81.97s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-608547 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1123 09:35:14.910191  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-608547 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m21.965059362s)
--- PASS: TestJSONOutput/start/Command (81.97s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.86s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-608547 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-608547 --output=json --user=testUser: (5.862292856s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-582854 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-582854 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (91.861857ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e10a15e2-e5fe-4b4c-b310-e5bd08cf01d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-582854] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c38dc5d6-976d-47a2-8ec8-13abe1005b91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21969"}}
	{"specversion":"1.0","id":"eef7e832-dc46-49a8-97a4-7669dba5923b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"65f42095-bcd7-4c52-80b1-ebc95bdcc7ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig"}}
	{"specversion":"1.0","id":"99a39cda-0a3a-4946-90b0-93a6d71396df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube"}}
	{"specversion":"1.0","id":"c4f4361e-67a4-4d6f-b42f-8b93e05274dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"60e31b46-45e2-4f16-93e6-b5eed12e7151","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"edad3e33-910f-44ce-bc74-7a71e4f3e024","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-582854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-582854
--- PASS: TestErrorJSONOutput (0.24s)
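The error path keeps the same machine-readable format as the start path: each stdout line is a CloudEvents-style JSON object, and the failure arrives as an io.k8s.sigs.minikube.error event carrying the exit code. A small decoding sketch; the struct is inferred from the events printed above, not taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
)

// Shape of the events emitted with --output=json, as seen in the stdout
// above: CloudEvents envelope fields plus a string-valued data map.
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Abbreviated copy of the final event from the run above.
	line := `{"specversion":"1.0","id":"edad3e33-910f-44ce-bc74-7a71e4f3e024","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("exit code %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
	}
}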

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (41.7s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-520999 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-520999 --network=: (39.428842822s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-520999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-520999
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-520999: (2.250958447s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.70s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (34.86s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-245518 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-245518 --network=bridge: (32.750101247s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-245518" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-245518
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-245518: (2.074485018s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.86s)

                                                
                                    
x
+
TestKicExistingNetwork (37.08s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1123 09:37:06.416734  284904 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1123 09:37:06.432632  284904 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1123 09:37:06.432711  284904 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1123 09:37:06.432728  284904 cli_runner.go:164] Run: docker network inspect existing-network
W1123 09:37:06.456420  284904 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1123 09:37:06.456452  284904 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1123 09:37:06.456469  284904 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1123 09:37:06.456574  284904 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1123 09:37:06.481810  284904 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d56166f18c3a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0e:f2:0f:1a:18:9c} reservation:<nil>}
I1123 09:37:06.482180  284904 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40003d59b0}
I1123 09:37:06.482219  284904 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1123 09:37:06.482272  284904 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1123 09:37:06.541300  284904 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-323249 --network=existing-network
E1123 09:37:29.809605  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-323249 --network=existing-network: (34.882815087s)
helpers_test.go:175: Cleaning up "existing-network-323249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-323249
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-323249: (2.03511866s)
I1123 09:37:43.476236  284904 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.08s)
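The interesting part of this test is in the network_create log: 192.168.49.0/24 is skipped because an existing bridge (br-d56166f18c3a) already owns it, and 192.168.58.0/24 is chosen for the new `docker network create`. A rough sketch of that "first free /24" selection; the candidate list and the step of 9 in the third octet (49, 58, 67, ... as seen across this report) are inferred from the log, not minikube's actual algorithm:

package main

import (
	"fmt"
	"net"
)

// Walk candidate 192.168.x.0/24 subnets and return the first one that no
// existing bridge already uses. Illustrative helper, not minikube code.
func pickFreeSubnet(taken map[string]bool) (*net.IPNet, error) {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[cidr] {
			continue
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	// In the run above 192.168.49.0/24 was taken, so the next candidate,
	// 192.168.58.0/24, was passed to `docker network create --subnet=...`.
	taken := map[string]bool{"192.168.49.0/24": true}
	subnet, err := pickFreeSubnet(taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using", subnet) // using 192.168.58.0/24
}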

                                                
                                    
x
+
TestKicCustomSubnet (37.54s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-601523 --subnet=192.168.60.0/24
E1123 09:38:17.974640  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-601523 --subnet=192.168.60.0/24: (35.300899813s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-601523 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-601523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-601523
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-601523: (2.210581656s)
--- PASS: TestKicCustomSubnet (37.54s)
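The subnet assertion here is a single docker CLI call: print the first IPAM config entry of the created network and, presumably, compare it with the requested --subnet. A minimal sketch of that check (network name and expected CIDR copied from the run above; error handling simplified for illustration):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const network, want = "custom-subnet-601523", "192.168.60.0/24"

	// Same format string as kic_custom_network_test.go:161 in the log above.
	out, err := exec.Command("docker", "network", "inspect", network,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	got := strings.TrimSpace(string(out))
	if got != want {
		panic(fmt.Sprintf("subnet mismatch: got %q, want %q", got, want))
	}
	fmt.Println("subnet verified:", got)
}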

                                                
                                    
x
+
TestKicStaticIP (40.03s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-185036 --static-ip=192.168.200.200
E1123 09:38:52.881541  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-185036 --static-ip=192.168.200.200: (37.595326843s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-185036 ip
helpers_test.go:175: Cleaning up "static-ip-185036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-185036
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-185036: (2.294960238s)
--- PASS: TestKicStaticIP (40.03s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (73.43s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-077766 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-077766 --driver=docker  --container-runtime=crio: (33.942431253s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-080556 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-080556 --driver=docker  --container-runtime=crio: (33.781446137s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-077766
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-080556
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-080556" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-080556
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-080556: (2.160198743s)
helpers_test.go:175: Cleaning up "first-077766" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-077766
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-077766: (2.084017175s)
--- PASS: TestMinikubeProfile (73.43s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.9s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-256819 --memory=3072 --mount-string /tmp/TestMountStartserial2865379146/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1123 09:40:14.910059  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-256819 --memory=3072 --mount-string /tmp/TestMountStartserial2865379146/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.896529151s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.90s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-256819 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (9.03s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-258583 --memory=3072 --mount-string /tmp/TestMountStartserial2865379146/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-258583 --memory=3072 --mount-string /tmp/TestMountStartserial2865379146/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.029802072s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.03s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-258583 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-256819 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-256819 --alsologtostderr -v=5: (1.704267169s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-258583 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-258583
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-258583: (1.309667583s)
--- PASS: TestMountStart/serial/Stop (1.31s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.79s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-258583
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-258583: (6.788401797s)
--- PASS: TestMountStart/serial/RestartStopped (7.79s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-258583 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (136.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-106482 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1123 09:42:29.809227  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-106482 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m16.355626822s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (136.88s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-106482 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-106482 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-106482 -- rollout status deployment/busybox: (3.607575907s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-106482 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-106482 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-106482 -- exec busybox-7b57f96db7-27r9g -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-106482 -- exec busybox-7b57f96db7-68lm5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-106482 -- exec busybox-7b57f96db7-27r9g -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-106482 -- exec busybox-7b57f96db7-68lm5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-106482 -- exec busybox-7b57f96db7-27r9g -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-106482 -- exec busybox-7b57f96db7-68lm5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.37s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-106482 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-106482 -- exec busybox-7b57f96db7-27r9g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-106482 -- exec busybox-7b57f96db7-27r9g -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-106482 -- exec busybox-7b57f96db7-68lm5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-106482 -- exec busybox-7b57f96db7-68lm5 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (57.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-106482 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-106482 -v=5 --alsologtostderr: (56.902619984s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.61s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-106482 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.12s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 cp testdata/cp-test.txt multinode-106482:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 cp multinode-106482:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2015372359/001/cp-test_multinode-106482.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 cp multinode-106482:/home/docker/cp-test.txt multinode-106482-m02:/home/docker/cp-test_multinode-106482_multinode-106482-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482-m02 "sudo cat /home/docker/cp-test_multinode-106482_multinode-106482-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 cp multinode-106482:/home/docker/cp-test.txt multinode-106482-m03:/home/docker/cp-test_multinode-106482_multinode-106482-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482-m03 "sudo cat /home/docker/cp-test_multinode-106482_multinode-106482-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 cp testdata/cp-test.txt multinode-106482-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 cp multinode-106482-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2015372359/001/cp-test_multinode-106482-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 cp multinode-106482-m02:/home/docker/cp-test.txt multinode-106482:/home/docker/cp-test_multinode-106482-m02_multinode-106482.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482 "sudo cat /home/docker/cp-test_multinode-106482-m02_multinode-106482.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 cp multinode-106482-m02:/home/docker/cp-test.txt multinode-106482-m03:/home/docker/cp-test_multinode-106482-m02_multinode-106482-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482-m03 "sudo cat /home/docker/cp-test_multinode-106482-m02_multinode-106482-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 cp testdata/cp-test.txt multinode-106482-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 cp multinode-106482-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2015372359/001/cp-test_multinode-106482-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 cp multinode-106482-m03:/home/docker/cp-test.txt multinode-106482:/home/docker/cp-test_multinode-106482-m03_multinode-106482.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482 "sudo cat /home/docker/cp-test_multinode-106482-m03_multinode-106482.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 cp multinode-106482-m03:/home/docker/cp-test.txt multinode-106482-m02:/home/docker/cp-test_multinode-106482-m03_multinode-106482-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 ssh -n multinode-106482-m02 "sudo cat /home/docker/cp-test_multinode-106482-m03_multinode-106482-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.52s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-106482 node stop m03: (1.336191853s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-106482 status: exit status 7 (548.028889ms)

                                                
                                                
-- stdout --
	multinode-106482
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-106482-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-106482-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-106482 status --alsologtostderr: exit status 7 (560.245017ms)

                                                
                                                
-- stdout --
	multinode-106482
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-106482-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-106482-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:44:20.420374  392318 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:44:20.420537  392318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:44:20.420551  392318 out.go:374] Setting ErrFile to fd 2...
	I1123 09:44:20.420556  392318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:44:20.420820  392318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:44:20.420998  392318 out.go:368] Setting JSON to false
	I1123 09:44:20.421032  392318 mustload.go:66] Loading cluster: multinode-106482
	I1123 09:44:20.421092  392318 notify.go:221] Checking for updates...
	I1123 09:44:20.421537  392318 config.go:182] Loaded profile config "multinode-106482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:44:20.421559  392318 status.go:174] checking status of multinode-106482 ...
	I1123 09:44:20.422436  392318 cli_runner.go:164] Run: docker container inspect multinode-106482 --format={{.State.Status}}
	I1123 09:44:20.441808  392318 status.go:371] multinode-106482 host status = "Running" (err=<nil>)
	I1123 09:44:20.441838  392318 host.go:66] Checking if "multinode-106482" exists ...
	I1123 09:44:20.442238  392318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-106482
	I1123 09:44:20.470297  392318 host.go:66] Checking if "multinode-106482" exists ...
	I1123 09:44:20.470605  392318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:44:20.470651  392318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-106482
	I1123 09:44:20.488865  392318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33279 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/multinode-106482/id_rsa Username:docker}
	I1123 09:44:20.598807  392318 ssh_runner.go:195] Run: systemctl --version
	I1123 09:44:20.605228  392318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:44:20.618362  392318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:44:20.687418  392318 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 09:44:20.677652523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:44:20.687969  392318 kubeconfig.go:125] found "multinode-106482" server: "https://192.168.67.2:8443"
	I1123 09:44:20.688003  392318 api_server.go:166] Checking apiserver status ...
	I1123 09:44:20.688046  392318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:44:20.699363  392318 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1225/cgroup
	I1123 09:44:20.709089  392318 api_server.go:182] apiserver freezer: "10:freezer:/docker/155febe3f95b8d4bfab156ec013dd4a911ccec46e1ef51017b79e5d8c2d1d580/crio/crio-5bcf27287c05b2afdacce9a6c01dd11319e33188b150d198f911e0cb378927eb"
	I1123 09:44:20.709167  392318 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/155febe3f95b8d4bfab156ec013dd4a911ccec46e1ef51017b79e5d8c2d1d580/crio/crio-5bcf27287c05b2afdacce9a6c01dd11319e33188b150d198f911e0cb378927eb/freezer.state
	I1123 09:44:20.717015  392318 api_server.go:204] freezer state: "THAWED"
	I1123 09:44:20.717045  392318 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1123 09:44:20.725748  392318 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1123 09:44:20.725783  392318 status.go:463] multinode-106482 apiserver status = Running (err=<nil>)
	I1123 09:44:20.725795  392318 status.go:176] multinode-106482 status: &{Name:multinode-106482 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:44:20.725816  392318 status.go:174] checking status of multinode-106482-m02 ...
	I1123 09:44:20.726128  392318 cli_runner.go:164] Run: docker container inspect multinode-106482-m02 --format={{.State.Status}}
	I1123 09:44:20.748381  392318 status.go:371] multinode-106482-m02 host status = "Running" (err=<nil>)
	I1123 09:44:20.748407  392318 host.go:66] Checking if "multinode-106482-m02" exists ...
	I1123 09:44:20.748713  392318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-106482-m02
	I1123 09:44:20.769587  392318 host.go:66] Checking if "multinode-106482-m02" exists ...
	I1123 09:44:20.769934  392318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:44:20.769980  392318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-106482-m02
	I1123 09:44:20.788184  392318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33284 SSHKeyPath:/home/jenkins/minikube-integration/21969-282998/.minikube/machines/multinode-106482-m02/id_rsa Username:docker}
	I1123 09:44:20.890652  392318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:44:20.903830  392318 status.go:176] multinode-106482-m02 status: &{Name:multinode-106482-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:44:20.903906  392318 status.go:174] checking status of multinode-106482-m03 ...
	I1123 09:44:20.904232  392318 cli_runner.go:164] Run: docker container inspect multinode-106482-m03 --format={{.State.Status}}
	I1123 09:44:20.921928  392318 status.go:371] multinode-106482-m03 host status = "Stopped" (err=<nil>)
	I1123 09:44:20.921950  392318 status.go:384] host is not running, skipping remaining checks
	I1123 09:44:20.921959  392318 status.go:176] multinode-106482-m03 status: &{Name:multinode-106482-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)
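
The status check above shows how a control-plane node is probed: pgrep finds the kube-apiserver process, its freezer cgroup state is read (THAWED in this run, i.e. not frozen), and finally the /healthz endpoint is queried, with an HTTP 200 "ok" reported as Running. A minimal Go sketch of that last step, assuming the endpoint from this run and skipping TLS verification purely to stay self-contained:

	// Illustrative sketch, not minikube's code: probe an apiserver /healthz
	// endpoint the way the status lines above report it.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The test cluster uses its own CA; verification is skipped
				// here only to keep the example standalone.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A 200 with body "ok" is what the log above records as Running.
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
	}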

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-106482 node start m03 -v=5 --alsologtostderr: (7.664338447s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.46s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (79.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-106482
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-106482
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-106482: (25.011097323s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-106482 --wait=true -v=5 --alsologtostderr
E1123 09:45:14.909028  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-106482 --wait=true -v=5 --alsologtostderr: (54.703275986s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-106482
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.85s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-106482 node delete m03: (4.982420164s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.68s)
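
The last step above checks node readiness with a kubectl go-template that prints each node's Ready condition. A sketch of the same query, assuming kubectl is on PATH and the current context points at this cluster:

	// Sketch, not part of the test suite: run the readiness go-template shown
	// above and report whether every node's Ready condition is True.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", tmpl).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err, string(out))
			return
		}
		// The template wraps its output in literal single quotes; strip them first.
		for _, status := range strings.Fields(strings.ReplaceAll(string(out), "'", "")) {
			if status != "True" {
				fmt.Println("a node is not Ready:", status)
				return
			}
		}
		fmt.Println("all nodes Ready")
	}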

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-106482 stop: (23.786123917s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-106482 status: exit status 7 (88.850183ms)

                                                
                                                
-- stdout --
	multinode-106482
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-106482-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-106482 status --alsologtostderr: exit status 7 (87.331487ms)

                                                
                                                
-- stdout --
	multinode-106482
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-106482-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:46:18.843910  400131 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:46:18.844131  400131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:46:18.844159  400131 out.go:374] Setting ErrFile to fd 2...
	I1123 09:46:18.844177  400131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:46:18.844446  400131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:46:18.844654  400131 out.go:368] Setting JSON to false
	I1123 09:46:18.844712  400131 mustload.go:66] Loading cluster: multinode-106482
	I1123 09:46:18.844800  400131 notify.go:221] Checking for updates...
	I1123 09:46:18.845159  400131 config.go:182] Loaded profile config "multinode-106482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:46:18.845192  400131 status.go:174] checking status of multinode-106482 ...
	I1123 09:46:18.846017  400131 cli_runner.go:164] Run: docker container inspect multinode-106482 --format={{.State.Status}}
	I1123 09:46:18.864263  400131 status.go:371] multinode-106482 host status = "Stopped" (err=<nil>)
	I1123 09:46:18.864285  400131 status.go:384] host is not running, skipping remaining checks
	I1123 09:46:18.864292  400131 status.go:176] multinode-106482 status: &{Name:multinode-106482 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 09:46:18.864328  400131 status.go:174] checking status of multinode-106482-m02 ...
	I1123 09:46:18.864628  400131 cli_runner.go:164] Run: docker container inspect multinode-106482-m02 --format={{.State.Status}}
	I1123 09:46:18.884555  400131 status.go:371] multinode-106482-m02 host status = "Stopped" (err=<nil>)
	I1123 09:46:18.884574  400131 status.go:384] host is not running, skipping remaining checks
	I1123 09:46:18.884593  400131 status.go:176] multinode-106482-m02 status: &{Name:multinode-106482-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.96s)
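
Both status calls above decide Host: Stopped from nothing more than `docker container inspect --format {{.State.Status}}` on the profile's container. A hedged sketch of that mapping; the container name is taken from this run and the state names cover only the common cases:

	// Sketch: map the docker container state onto the Host field shown above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostStatus(container string) string {
		out, err := exec.Command("docker", "container", "inspect", container,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			// Treat a missing container the same as a stopped host for brevity.
			return "Stopped"
		}
		switch strings.TrimSpace(string(out)) {
		case "running":
			return "Running"
		case "exited", "created":
			return "Stopped"
		default:
			return strings.TrimSpace(string(out))
		}
	}

	func main() {
		fmt.Println(hostStatus("multinode-106482"))
	}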

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-106482 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-106482 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (47.988828567s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-106482 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.71s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-106482
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-106482-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-106482-m02 --driver=docker  --container-runtime=crio: exit status 14 (90.158182ms)

                                                
                                                
-- stdout --
	* [multinode-106482-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-106482-m02' is duplicated with machine name 'multinode-106482-m02' in profile 'multinode-106482'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-106482-m03 --driver=docker  --container-runtime=crio
E1123 09:47:29.812438  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-106482-m03 --driver=docker  --container-runtime=crio: (34.57875435s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-106482
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-106482: exit status 80 (326.178771ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-106482 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-106482-m03 already exists in multinode-106482-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-106482-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-106482-m03: (2.082781686s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.13s)

                                                
                                    
TestPreload (129.63s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-508948 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-508948 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.861081405s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-508948 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-508948 image pull gcr.io/k8s-minikube/busybox: (2.236747651s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-508948
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-508948: (6.157035765s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-508948 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-508948 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (55.653090668s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-508948 image list
helpers_test.go:175: Cleaning up "test-preload-508948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-508948
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-508948: (2.478126621s)
--- PASS: TestPreload (129.63s)

                                                
                                    
TestScheduledStopUnix (108.15s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-090349 --memory=3072 --driver=docker  --container-runtime=crio
E1123 09:50:14.909539  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-090349 --memory=3072 --driver=docker  --container-runtime=crio: (31.572955342s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-090349 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 09:50:30.405401  414131 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:50:30.405625  414131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:50:30.405653  414131 out.go:374] Setting ErrFile to fd 2...
	I1123 09:50:30.405672  414131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:50:30.405942  414131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:50:30.406227  414131 out.go:368] Setting JSON to false
	I1123 09:50:30.406380  414131 mustload.go:66] Loading cluster: scheduled-stop-090349
	I1123 09:50:30.406773  414131 config.go:182] Loaded profile config "scheduled-stop-090349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:50:30.406868  414131 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/config.json ...
	I1123 09:50:30.407108  414131 mustload.go:66] Loading cluster: scheduled-stop-090349
	I1123 09:50:30.407272  414131 config.go:182] Loaded profile config "scheduled-stop-090349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-090349 -n scheduled-stop-090349
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-090349 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 09:50:30.846428  414220 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:50:30.846607  414220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:50:30.846639  414220 out.go:374] Setting ErrFile to fd 2...
	I1123 09:50:30.846665  414220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:50:30.846945  414220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:50:30.847243  414220 out.go:368] Setting JSON to false
	I1123 09:50:30.848119  414220 daemonize_unix.go:73] killing process 414154 as it is an old scheduled stop
	I1123 09:50:30.851753  414220 mustload.go:66] Loading cluster: scheduled-stop-090349
	I1123 09:50:30.852254  414220 config.go:182] Loaded profile config "scheduled-stop-090349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:50:30.852372  414220 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/config.json ...
	I1123 09:50:30.852579  414220 mustload.go:66] Loading cluster: scheduled-stop-090349
	I1123 09:50:30.852726  414220 config.go:182] Loaded profile config "scheduled-stop-090349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1123 09:50:30.858183  284904 retry.go:31] will retry after 97.846µs: open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/pid: no such file or directory
I1123 09:50:30.859312  284904 retry.go:31] will retry after 177.418µs: open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/pid: no such file or directory
I1123 09:50:30.860484  284904 retry.go:31] will retry after 169.247µs: open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/pid: no such file or directory
I1123 09:50:30.861600  284904 retry.go:31] will retry after 504.545µs: open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/pid: no such file or directory
I1123 09:50:30.862739  284904 retry.go:31] will retry after 588.387µs: open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/pid: no such file or directory
I1123 09:50:30.863898  284904 retry.go:31] will retry after 1.081907ms: open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/pid: no such file or directory
I1123 09:50:30.865057  284904 retry.go:31] will retry after 1.379324ms: open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/pid: no such file or directory
I1123 09:50:30.867291  284904 retry.go:31] will retry after 1.758871ms: open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/pid: no such file or directory
I1123 09:50:30.869577  284904 retry.go:31] will retry after 2.141101ms: open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/pid: no such file or directory
I1123 09:50:30.872806  284904 retry.go:31] will retry after 4.480962ms: open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/pid: no such file or directory
I1123 09:50:30.879558  284904 retry.go:31] will retry after 2.913165ms: open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/pid: no such file or directory
I1123 09:50:30.882834  284904 retry.go:31] will retry after 11.50322ms: open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/pid: no such file or directory
I1123 09:50:30.895065  284904 retry.go:31] will retry after 11.410386ms: open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/pid: no such file or directory
I1123 09:50:30.907317  284904 retry.go:31] will retry after 23.67097ms: open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/pid: no such file or directory
I1123 09:50:30.931547  284904 retry.go:31] will retry after 20.363034ms: open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/pid: no such file or directory
I1123 09:50:30.952072  284904 retry.go:31] will retry after 56.663283ms: open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-090349 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-090349 -n scheduled-stop-090349
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-090349
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-090349 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 09:50:56.818472  414589 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:50:56.818659  414589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:50:56.818686  414589 out.go:374] Setting ErrFile to fd 2...
	I1123 09:50:56.818705  414589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:50:56.818987  414589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:50:56.819264  414589 out.go:368] Setting JSON to false
	I1123 09:50:56.819402  414589 mustload.go:66] Loading cluster: scheduled-stop-090349
	I1123 09:50:56.819850  414589 config.go:182] Loaded profile config "scheduled-stop-090349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:50:56.819992  414589 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/scheduled-stop-090349/config.json ...
	I1123 09:50:56.820268  414589 mustload.go:66] Loading cluster: scheduled-stop-090349
	I1123 09:50:56.820452  414589 config.go:182] Loaded profile config "scheduled-stop-090349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-090349
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-090349: exit status 7 (72.989083ms)

                                                
                                                
-- stdout --
	scheduled-stop-090349
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-090349 -n scheduled-stop-090349
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-090349 -n scheduled-stop-090349: exit status 7 (68.481185ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-090349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-090349
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-090349: (4.957571095s)
--- PASS: TestScheduledStopUnix (108.15s)
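
The retry.go lines in this run poll for the scheduled-stop pid file with delays that roughly double between attempts. A small retry helper in the same spirit; the backoff factor, attempt cap, and pid-file path are illustrative assumptions rather than the values minikube uses:

	// Sketch of a grow-the-delay retry loop like the retry.go output above.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func retry(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2 // the intervals in the log grow in a similar fashion
		}
		return err
	}

	func main() {
		pidFile := "/tmp/scheduled-stop-example/pid" // example path only
		err := retry(10, 100*time.Microsecond, func() error {
			_, statErr := os.Stat(pidFile)
			return statErr
		})
		if err != nil {
			fmt.Println("gave up:", err)
		}
	}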

                                                
                                    
TestInsufficientStorage (13.68s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-268811 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-268811 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.863532905s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"43552ee8-6cf9-4ec5-9618-c26240bb286f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-268811] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"564229cf-3ff9-4411-8ad3-ed7eaf45fe31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21969"}}
	{"specversion":"1.0","id":"4e710099-4964-4cce-bf9b-8ce67bf10fc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"47b09f8d-97a3-4e39-babf-56de2b449c5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig"}}
	{"specversion":"1.0","id":"2d4a6a95-d261-4c38-8bf2-3d5227d84e7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube"}}
	{"specversion":"1.0","id":"f3a18bdb-5b73-416a-8f54-6dcb843ade84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"65b271bd-4943-4a8e-a345-c6acaf006c83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"201a882c-feb1-4834-8f5d-173202ce4b94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"898342d2-5f38-4e43-8db3-79c7d0100b96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8c2857ff-b0f2-4412-bb16-cf5e7f96c2fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"243193be-5c60-4e9a-ac6a-0dbe6920576a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e0cfaf95-b73e-47af-91e2-bb67e881d084","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-268811\" primary control-plane node in \"insufficient-storage-268811\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4006ccc9-5dc5-4062-8c63-699ed1182095","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2918a5cb-411b-4ddb-9491-3c3e39b59626","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d0daa625-2689-45ba-90ac-bffa322b66b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-268811 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-268811 --output=json --layout=cluster: exit status 7 (307.182153ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-268811","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-268811","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 09:51:58.081091  416307 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-268811" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-268811 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-268811 --output=json --layout=cluster: exit status 7 (305.501365ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-268811","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-268811","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 09:51:58.387199  416376 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-268811" does not appear in /home/jenkins/minikube-integration/21969-282998/kubeconfig
	E1123 09:51:58.397099  416376 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/insufficient-storage-268811/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-268811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-268811
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-268811: (2.206435501s)
--- PASS: TestInsufficientStorage (13.68s)
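
With --output=json, every step of the start above is emitted as a CloudEvents-style JSON line (specversion, a type such as io.k8s.sigs.minikube.step or io.k8s.sigs.minikube.error, and a data payload). A sketch that scans such a stream and surfaces the error event; the field names are copied from the lines above, everything else is assumed:

	// Sketch: read minikube --output=json lines from stdin and print the error
	// event, as seen in the insufficient-storage run above.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		scanner := bufio.NewScanner(os.Stdin)
		scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
		for scanner.Scan() {
			var ev event
			if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exit code %s (%s): %s\n",
					ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
			}
		}
	}

Piping the start command from this test into the sketch would print the RSRC_DOCKER_STORAGE event with exit code 26 shown above.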

                                                
                                    
TestRunningBinaryUpgrade (63.91s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.965608071 start -p running-upgrade-181411 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1123 10:00:14.909793  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.965608071 start -p running-upgrade-181411 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.376059495s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-181411 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-181411 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.290319911s)
helpers_test.go:175: Cleaning up "running-upgrade-181411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-181411
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-181411: (2.335965056s)
--- PASS: TestRunningBinaryUpgrade (63.91s)

                                                
                                    
TestKubernetesUpgrade (352.45s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-444006 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1123 09:55:32.883782  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-444006 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.764826619s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-444006
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-444006: (1.341173776s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-444006 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-444006 status --format={{.Host}}: exit status 7 (69.61776ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-444006 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1123 09:57:29.810421  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-444006 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m37.32805174s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-444006 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-444006 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-444006 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (125.258726ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-444006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-444006
	    minikube start -p kubernetes-upgrade-444006 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4440062 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-444006 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-444006 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-444006 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.156365694s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-444006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-444006
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-444006: (2.51301459s)
--- PASS: TestKubernetesUpgrade (352.45s)
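
The downgrade attempt above fails fast with K8S_DOWNGRADE_UNSUPPORTED before anything on the cluster is touched; the essence is a version comparison between the requested release and the one already deployed. A simplified sketch of such a check, not minikube's actual implementation:

	// Sketch of a refuse-to-downgrade check; parsing is deliberately minimal
	// and ignores pre-release suffixes.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// parse turns "v1.34.1" into [1 34 1].
	func parse(v string) []int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		nums := make([]int, len(parts))
		for i, p := range parts {
			nums[i], _ = strconv.Atoi(p)
		}
		return nums
	}

	func isDowngrade(existing, requested string) bool {
		e, r := parse(existing), parse(requested)
		for i := 0; i < len(e) && i < len(r); i++ {
			if r[i] != e[i] {
				return r[i] < e[i]
			}
		}
		return len(r) < len(e)
	}

	func main() {
		if isDowngrade("v1.34.1", "v1.28.0") {
			fmt.Println("refusing to downgrade the existing cluster")
		}
	}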

                                                
                                    
TestMissingContainerUpgrade (104.87s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3619925319 start -p missing-upgrade-461483 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3619925319 start -p missing-upgrade-461483 --memory=3072 --driver=docker  --container-runtime=crio: (56.794775433s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-461483
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-461483
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-461483 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-461483 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.290842625s)
helpers_test.go:175: Cleaning up "missing-upgrade-461483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-461483
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-461483: (2.155788329s)
--- PASS: TestMissingContainerUpgrade (104.87s)

                                                
                                    
TestPause/serial/Start (90.88s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-902289 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-902289 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m30.878807209s)
--- PASS: TestPause/serial/Start (90.88s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-454477 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-454477 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (119.279842ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-454477] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (45.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-454477 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1123 09:52:29.809601  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-454477 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.662749475s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-454477 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.22s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-454477 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-454477 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.585040135s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-454477 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-454477 status -o json: exit status 2 (316.448052ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-454477","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-454477
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-454477: (2.058770342s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.96s)
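
The `status -o json` call above returns one object per profile with Host, Kubelet, APIServer and Kubeconfig fields; after --no-kubernetes the host stays Running while the Kubernetes components read Stopped. A small decoder for that shape, with field names taken from the log line and nothing else assumed about minikube's own types:

	// Sketch: decode the `minikube status -o json` object shown above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := `{"Name":"NoKubernetes-454477","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var st profileStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		// Host can be Running while kubelet and apiserver are Stopped, which is
		// exactly the --no-kubernetes state this test exercises.
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
	}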

                                                
                                    
TestNoKubernetes/serial/Start (8.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-454477 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-454477 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.013181272s)
--- PASS: TestNoKubernetes/serial/Start (8.01s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21969-282998/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-454477 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-454477 "sudo systemctl is-active --quiet service kubelet": exit status 1 (300.050575ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
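
The verification above leans on `systemctl is-active --quiet service kubelet` exiting non-zero (status 3, inactive) when no kubelet unit is running, which `minikube ssh` then surfaces as its own non-zero exit. A sketch of the same check driven from Go; the binary path and profile name are taken from this run and would differ elsewhere:

	// Sketch: reproduce the kubelet-not-running check above by looking at the
	// exit status of `systemctl is-active` run through `minikube ssh`.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-454477",
			"sudo systemctl is-active --quiet service kubelet")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// systemctl reports 3 for an inactive unit; minikube ssh folds the
			// remote failure into a non-zero exit, which the test expects.
			fmt.Println("kubelet is not running, exit code:", exitErr.ExitCode())
			return
		}
		if err != nil {
			fmt.Println("could not run minikube ssh:", err)
			return
		}
		fmt.Println("kubelet service is active")
	}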

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-454477
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-454477: (1.321090219s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-454477 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-454477 --driver=docker  --container-runtime=crio: (6.834394412s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.83s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-454477 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-454477 "sudo systemctl is-active --quiet service kubelet": exit status 1 (280.267238ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestNetworkPlugins/group/false (3.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-507563 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-507563 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (201.42508ms)

                                                
                                                
-- stdout --
	* [false-507563] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:53:16.183834  425927 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:53:16.183948  425927 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:53:16.183959  425927 out.go:374] Setting ErrFile to fd 2...
	I1123 09:53:16.183965  425927 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:53:16.184263  425927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-282998/.minikube/bin
	I1123 09:53:16.184656  425927 out.go:368] Setting JSON to false
	I1123 09:53:16.185643  425927 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9345,"bootTime":1763882251,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1123 09:53:16.185716  425927 start.go:143] virtualization:  
	I1123 09:53:16.189301  425927 out.go:179] * [false-507563] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 09:53:16.192323  425927 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:53:16.192398  425927 notify.go:221] Checking for updates...
	I1123 09:53:16.198305  425927 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:53:16.201210  425927 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-282998/kubeconfig
	I1123 09:53:16.204008  425927 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-282998/.minikube
	I1123 09:53:16.206970  425927 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 09:53:16.209821  425927 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:53:16.213361  425927 config.go:182] Loaded profile config "pause-902289": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:53:16.213492  425927 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:53:16.242312  425927 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 09:53:16.242437  425927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:53:16.307032  425927 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 09:53:16.297966682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:53:16.307139  425927 docker.go:319] overlay module found
	I1123 09:53:16.310330  425927 out.go:179] * Using the docker driver based on user configuration
	I1123 09:53:16.313125  425927 start.go:309] selected driver: docker
	I1123 09:53:16.313144  425927 start.go:927] validating driver "docker" against <nil>
	I1123 09:53:16.313157  425927 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:53:16.316754  425927 out.go:203] 
	W1123 09:53:16.319564  425927 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1123 09:53:16.322586  425927 out.go:203] 

                                                
                                                
** /stderr **
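Note: the exit status 14 above is the expected result for this variant; the test still passes because the false CNI run exists only to confirm that minikube rejects --cni=false when the container runtime is crio, as the MK_USAGE message states. For comparison only, a sketch of an invocation that satisfies the CNI requirement (reusing the bridge CNI exercised by TestNetworkPlugins/group/bridge/Start later in this report; the profile name here is purely illustrative):

	# hypothetical start with an explicit CNI; any concrete CNI choice (bridge, kindnet, flannel, calico) clears the MK_USAGE check
	out/minikube-linux-arm64 start -p false-507563 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio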
net_test.go:88: 
----------------------- debugLogs start: false-507563 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-507563

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-507563

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-507563

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-507563

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-507563

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-507563

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-507563

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-507563

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-507563

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-507563

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-507563

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-507563" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-507563" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 09:52:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-902289
contexts:
- context:
    cluster: pause-902289
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 09:52:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-902289
  name: pause-902289
current-context: pause-902289
kind: Config
preferences: {}
users:
- name: pause-902289
  user:
    client-certificate: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/client.crt
    client-key: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/client.key
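
The config dump above explains the repeated "context was not found" errors in this debug log: the kubeconfig contains only the pause-902289 cluster and context, and no false-507563 context was ever created because the profile exited at the MK_USAGE check before a cluster was started. A quick way to confirm that outside the test (plain kubectl, not part of net_test.go):

	kubectl config current-context    # prints: pause-902289
	kubectl config get-contexts       # lists only pause-902289; no false-507563 entry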

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-507563

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-507563"

                                                
                                                
----------------------- debugLogs end: false-507563 [took: 3.354434861s] --------------------------------
helpers_test.go:175: Cleaning up "false-507563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-507563
--- PASS: TestNetworkPlugins/group/false (3.72s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (27.51s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-902289 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-902289 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.486945411s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.51s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.81s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (66.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.871341205 start -p stopped-upgrade-835081 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.871341205 start -p stopped-upgrade-835081 --memory=3072 --vm-driver=docker  --container-runtime=crio: (44.051228219s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.871341205 -p stopped-upgrade-835081 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.871341205 -p stopped-upgrade-835081 stop: (1.285923859s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-835081 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-835081 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.494287378s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (66.83s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (88.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-507563 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-507563 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m28.487936958s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.49s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-835081
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-835081: (1.225836623s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (81.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-507563 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1123 10:02:29.809460  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-507563 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m21.197652702s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (81.20s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-507563 "pgrep -a kubelet"
I1123 10:02:48.039275  284904 config.go:182] Loaded profile config "auto-507563": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-507563 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-65f8c" [6feaa2f9-fd78-4b82-a8fc-74e5f348c4d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-65f8c" [6feaa2f9-fd78-4b82-a8fc-74e5f348c4d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004378474s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.43s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-507563 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-507563 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-507563 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (58.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-507563 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-507563 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (58.861683959s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.86s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-ws7nd" [c6b42a8e-23f5-4a24-b884-78688e0a14a0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.012072672s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-507563 "pgrep -a kubelet"
I1123 10:03:41.492590  284904 config.go:182] Loaded profile config "kindnet-507563": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-507563 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-glx6c" [c0013809-2618-4bb9-9faf-84dfc1228428] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-glx6c" [c0013809-2618-4bb9-9faf-84dfc1228428] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.003921209s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-507563 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-507563 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-507563 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (82.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-507563 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-507563 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m22.681439591s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.68s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-x7kjn" [de722941-6a67-406a-9875-85b43791ef6e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003249058s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-507563 "pgrep -a kubelet"
I1123 10:04:26.183346  284904 config.go:182] Loaded profile config "flannel-507563": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-507563 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-57d7r" [b8e87994-1c95-40f8-b0ba-43f8a5146ca3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-57d7r" [b8e87994-1c95-40f8-b0ba-43f8a5146ca3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003647369s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-507563 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-507563 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-507563 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (80.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-507563 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1123 10:05:14.909188  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-507563 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m20.9818897s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.98s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-507563 "pgrep -a kubelet"
I1123 10:05:40.325849  284904 config.go:182] Loaded profile config "enable-default-cni-507563": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-507563 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q7m6v" [5e55cf90-b429-45e2-b2a5-098d3282e056] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-q7m6v" [5e55cf90-b429-45e2-b2a5-098d3282e056] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003650465s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-507563 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-507563 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-507563 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (57.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-507563 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-507563 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (57.58107889s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.58s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-507563 "pgrep -a kubelet"
I1123 10:06:25.261235  284904 config.go:182] Loaded profile config "bridge-507563": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-507563 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lklg2" [175696e5-a65a-4ae3-8a38-0a15bdc201c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lklg2" [175696e5-a65a-4ae3-8a38-0a15bdc201c3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003445382s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-507563 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-507563 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-507563 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (77.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-507563 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-507563 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m17.873604641s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.87s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-507563 "pgrep -a kubelet"
I1123 10:07:09.869445  284904 config.go:182] Loaded profile config "custom-flannel-507563": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-507563 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8htl4" [e044ace2-abe2-4508-b5e2-b30b6e5cbb3b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8htl4" [e044ace2-abe2-4508-b5e2-b30b6e5cbb3b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004291478s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-507563 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-507563 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-507563 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (66.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-706028 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1123 10:07:53.548081  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/auto-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:07:58.669805  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/auto-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:08:08.911209  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/auto-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-706028 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m6.191349845s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (66.19s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-fdzmz" [560ee366-4424-4013-8329-2393134a8f03] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007859508s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-507563 "pgrep -a kubelet"
I1123 10:08:25.222870  284904 config.go:182] Loaded profile config "calico-507563": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-507563 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4fqsc" [74107cef-ec3e-46da-bc32-b447014bb8bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1123 10:08:29.392752  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/auto-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-4fqsc" [74107cef-ec3e-46da-bc32-b447014bb8bf] Running
E1123 10:08:35.107881  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/kindnet-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:08:35.114267  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/kindnet-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:08:35.125616  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/kindnet-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:08:35.147092  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/kindnet-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:08:35.188515  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/kindnet-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:08:35.269894  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/kindnet-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:08:35.431703  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/kindnet-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:08:35.753532  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/kindnet-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:08:36.394845  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/kindnet-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004374575s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-507563 exec deployment/netcat -- nslookup kubernetes.default
E1123 10:08:37.676413  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/kindnet-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-507563 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-507563 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-706028 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3d8762ee-c527-4c0e-9d25-4aa79457ae6b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3d8762ee-c527-4c0e-9d25-4aa79457ae6b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004658333s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-706028 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.52s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (73.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m13.875105408s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-706028 --alsologtostderr -v=3
E1123 10:09:16.082861  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/kindnet-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:09:19.895825  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:09:19.902493  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:09:19.913843  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:09:19.935189  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:09:19.976552  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:09:20.057914  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:09:20.219545  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:09:20.541771  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:09:21.183947  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:09:22.465278  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:09:25.027175  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-706028 --alsologtostderr -v=3: (12.90546383s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.91s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706028 -n old-k8s-version-706028
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706028 -n old-k8s-version-706028: exit status 7 (333.56211ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-706028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (59.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-706028 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1123 10:09:30.148908  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:09:40.390229  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:09:57.044667  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/kindnet-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:10:00.871992  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:10:14.909450  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-706028 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (58.917204557s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706028 -n old-k8s-version-706028
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (59.43s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-020224 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6365a14a-d665-4e48-8060-59665b080967] Pending
helpers_test.go:352: "busybox" [6365a14a-d665-4e48-8060-59665b080967] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6365a14a-d665-4e48-8060-59665b080967] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004198966s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-020224 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-w7rtb" [f7af7097-20f5-4919-86c3-74411c41cfb0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003408761s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-020224 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-020224 --alsologtostderr -v=3: (12.124419542s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-w7rtb" [f7af7097-20f5-4919-86c3-74411c41cfb0] Running
E1123 10:10:32.276137  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/auto-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003429272s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-706028 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-706028 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-020224 -n no-preload-020224
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-020224 -n no-preload-020224: exit status 7 (105.014382ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-020224 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (60.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-020224 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m0.535195798s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-020224 -n no-preload-020224
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (60.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (83.73s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 10:10:50.870199  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/enable-default-cni-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:11:01.111982  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/enable-default-cni-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:11:18.966913  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/kindnet-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:11:21.594342  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/enable-default-cni-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:11:25.597548  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/bridge-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:11:25.604364  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/bridge-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:11:25.615721  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/bridge-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:11:25.637088  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/bridge-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:11:25.678455  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/bridge-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:11:25.760157  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/bridge-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:11:25.921712  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/bridge-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:11:26.243578  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/bridge-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:11:26.885455  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/bridge-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:11:28.167138  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/bridge-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:11:30.729576  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/bridge-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:11:35.851505  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/bridge-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:11:37.979322  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/addons-984173/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m23.734898781s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.73s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-n54fr" [c3920bf6-1c4d-4052-b857-79560bb6954b] Running
E1123 10:11:46.092947  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/bridge-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003047992s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-n54fr" [c3920bf6-1c4d-4052-b857-79560bb6954b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003465024s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-020224 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-020224 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 10:12:03.756094  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:12:06.574289  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/bridge-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:12:10.120641  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/custom-flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:12:10.127081  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/custom-flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:12:10.138455  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/custom-flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:12:10.159844  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/custom-flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:12:10.201214  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/custom-flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:12:10.282593  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/custom-flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:12:10.444089  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/custom-flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m21.032687247s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-566990 create -f testdata/busybox.yaml
E1123 10:12:10.766161  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/custom-flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f36b53a6-0047-4dbc-9603-6a1965a89bb6] Pending
E1123 10:12:11.408178  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/custom-flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [f36b53a6-0047-4dbc-9603-6a1965a89bb6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1123 10:12:12.689573  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/custom-flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:12:12.885954  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [f36b53a6-0047-4dbc-9603-6a1965a89bb6] Running
E1123 10:12:15.250948  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/custom-flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004163547s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-566990 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-566990 --alsologtostderr -v=3
E1123 10:12:29.809220  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/functional-605613/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:12:30.613577  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/custom-flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-566990 --alsologtostderr -v=3: (12.231515297s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-566990 -n embed-certs-566990
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-566990 -n embed-certs-566990: exit status 7 (73.544604ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-566990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (51.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 10:12:47.536013  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/bridge-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:12:48.417852  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/auto-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:12:51.095505  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/custom-flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:16.117584  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/auto-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:18.819445  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/calico-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:18.825820  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/calico-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:18.837204  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/calico-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:18.858567  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/calico-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:18.899915  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/calico-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:18.981292  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/calico-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:19.142808  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/calico-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:19.464572  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/calico-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:20.106632  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/calico-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:21.388427  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/calico-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-566990 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.041716825s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-566990 -n embed-certs-566990
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-330197 create -f testdata/busybox.yaml
E1123 10:13:23.950715  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/calico-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4387e28f-77a9-4288-b0ad-d58ae149c2b9] Pending
E1123 10:13:24.477652  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/enable-default-cni-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [4387e28f-77a9-4288-b0ad-d58ae149c2b9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4387e28f-77a9-4288-b0ad-d58ae149c2b9] Running
E1123 10:13:29.072201  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/calico-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003852326s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-330197 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hmrpb" [a7bf3071-fcde-4095-a28f-fb26acf0096e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004227543s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hmrpb" [a7bf3071-fcde-4095-a28f-fb26acf0096e] Running
E1123 10:13:32.056838  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/custom-flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003361975s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-566990 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-330197 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-330197 --alsologtostderr -v=3: (12.192223796s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-566990 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (43.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-499584 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-499584 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.312852264s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-330197 -n default-k8s-diff-port-330197
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-330197 -n default-k8s-diff-port-330197: exit status 7 (107.384529ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-330197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 10:13:57.665601  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:57.674993  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:57.686302  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:57.707618  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:57.748967  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:57.833581  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:57.995206  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:58.316473  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:58.957964  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:13:59.796360  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/calico-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:14:00.239867  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:14:02.801578  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:14:02.809014  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/kindnet-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:14:07.923194  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:14:09.459254  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/bridge-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:14:18.165086  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:14:19.895202  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-330197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (56.766672749s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-330197 -n default-k8s-diff-port-330197
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-499584 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-499584 --alsologtostderr -v=3: (2.174344593s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-499584 -n newest-cni-499584
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-499584 -n newest-cni-499584: exit status 7 (79.943655ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-499584 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)
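
The addon-while-stopped step above can be repeated by hand; a minimal sketch using only the commands logged in this block, where exit status 7 from "status" simply reports a stopped host and is expected at this point:

	out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-499584 || true
	out/minikube-linux-arm64 addons enable dashboard -p newest-cni-499584 --images=MetricsScraper=registry.k8s.io/echoserver:1.4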

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (15.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-499584 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 10:14:38.646963  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/old-k8s-version-706028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:14:40.758210  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/calico-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-499584 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (14.780404569s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-499584 -n newest-cni-499584
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.19s)
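
For reference, the second start above can be reproduced outside the test harness; a minimal sketch, assuming the freshly built out/minikube-linux-arm64 binary and the newest-cni-499584 profile from this run, with the flags copied verbatim from the logged invocation:

	out/minikube-linux-arm64 start -p newest-cni-499584 --memory=3072 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1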

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8wqtw" [eb24f5e7-c61d-442a-91a6-e5d5c11eb288] Running
E1123 10:14:47.597911  284904 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/flannel-507563/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002966457s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)
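
The readiness wait above can be approximated manually; a minimal sketch, assuming the default-k8s-diff-port-330197 context and the kubernetes-dashboard namespace and label shown in the log (kubectl wait stands in here for the harness's own polling):

	kubectl --context default-k8s-diff-port-330197 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-330197 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m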

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-499584 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8wqtw" [eb24f5e7-c61d-442a-91a6-e5d5c11eb288] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003841036s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-330197 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)
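
A minimal manual equivalent of the addon check above, assuming the same context; the namespace and deployment name are taken from the logged kubectl invocation:

	kubectl --context default-k8s-diff-port-330197 -n kubernetes-dashboard get deploy
	kubectl --context default-k8s-diff-port-330197 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper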

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-330197 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    

Test skip (31/328)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.43s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-864519 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-864519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-864519
--- SKIP: TestDownloadOnlyKic (0.43s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-507563 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-507563

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-507563

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-507563

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-507563

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-507563

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-507563

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-507563

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-507563

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-507563

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-507563

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-507563

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-507563" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-507563" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 09:52:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-902289
contexts:
- context:
    cluster: pause-902289
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 09:52:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-902289
  name: pause-902289
current-context: pause-902289
kind: Config
preferences: {}
users:
- name: pause-902289
  user:
    client-certificate: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/client.crt
    client-key: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-507563

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-507563"

                                                
                                                
----------------------- debugLogs end: kubenet-507563 [took: 3.394359148s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-507563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-507563
--- SKIP: TestNetworkPlugins/group/kubenet (3.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-507563 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-507563

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-507563

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-507563

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-507563

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-507563

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-507563

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-507563

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-507563

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-507563

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-507563

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-507563

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-507563" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-507563

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-507563

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-507563

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-507563

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-507563" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-507563" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: kubelet daemon config:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> k8s: kubelet logs:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21969-282998/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 09:52:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-902289
contexts:
- context:
    cluster: pause-902289
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 09:52:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-902289
  name: pause-902289
current-context: pause-902289
kind: Config
preferences: {}
users:
- name: pause-902289
  user:
    client-certificate: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/client.crt
    client-key: /home/jenkins/minikube-integration/21969-282998/.minikube/profiles/pause-902289/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-507563

>>> host: docker daemon status:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: docker daemon config:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: docker system info:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: cri-docker daemon status:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: cri-docker daemon config:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: cri-dockerd version:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: containerd daemon status:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: containerd daemon config:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: containerd config dump:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: crio daemon status:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: crio daemon config:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: /etc/crio:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

>>> host: crio config:
* Profile "cilium-507563" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-507563"

----------------------- debugLogs end: cilium-507563 [took: 3.818580923s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-507563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-507563
--- SKIP: TestNetworkPlugins/group/cilium (3.99s)

x
+
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-097888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-097888
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
